00:00:00.001 Started by upstream project "autotest-per-patch" build number 132573 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.090 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.091 The recommended git tool is: git 00:00:00.091 using credential 00000000-0000-0000-0000-000000000002 00:00:00.093 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.180 Fetching changes from the remote Git repository 00:00:00.187 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.249 Using shallow fetch with depth 1 00:00:00.249 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.249 > git --version # timeout=10 00:00:00.291 > git --version # 'git version 2.39.2' 00:00:00.291 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.331 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.331 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:09.287 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:09.305 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:09.321 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:09.321 > git config core.sparsecheckout # timeout=10 00:00:09.336 > git read-tree -mu HEAD # timeout=10 00:00:09.355 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:09.384 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:09.385 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:09.517 [Pipeline] Start of Pipeline 00:00:09.534 [Pipeline] library 00:00:09.536 Loading library shm_lib@master 00:00:09.536 Library shm_lib@master is cached. Copying from home. 00:00:09.566 [Pipeline] node 00:00:09.576 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:09.587 [Pipeline] { 00:00:09.607 [Pipeline] catchError 00:00:09.609 [Pipeline] { 00:00:09.629 [Pipeline] wrap 00:00:09.644 [Pipeline] { 00:00:09.653 [Pipeline] stage 00:00:09.655 [Pipeline] { (Prologue) 00:00:09.860 [Pipeline] sh 00:00:10.149 + logger -p user.info -t JENKINS-CI 00:00:10.174 [Pipeline] echo 00:00:10.176 Node: CYP9 00:00:10.186 [Pipeline] sh 00:00:10.500 [Pipeline] setCustomBuildProperty 00:00:10.513 [Pipeline] echo 00:00:10.515 Cleanup processes 00:00:10.521 [Pipeline] sh 00:00:10.811 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:10.811 3550863 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:10.827 [Pipeline] sh 00:00:11.117 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:11.117 ++ grep -v 'sudo pgrep' 00:00:11.117 ++ awk '{print $1}' 00:00:11.117 + sudo kill -9 00:00:11.117 + true 00:00:11.134 [Pipeline] cleanWs 00:00:11.145 [WS-CLEANUP] Deleting project workspace... 00:00:11.145 [WS-CLEANUP] Deferred wipeout is used... 
00:00:11.153 [WS-CLEANUP] done 00:00:11.158 [Pipeline] setCustomBuildProperty 00:00:11.173 [Pipeline] sh 00:00:11.460 + sudo git config --global --replace-all safe.directory '*' 00:00:11.559 [Pipeline] httpRequest 00:00:12.020 [Pipeline] echo 00:00:12.022 Sorcerer 10.211.164.20 is alive 00:00:12.034 [Pipeline] retry 00:00:12.036 [Pipeline] { 00:00:12.053 [Pipeline] httpRequest 00:00:12.058 HttpMethod: GET 00:00:12.059 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.059 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.081 Response Code: HTTP/1.1 200 OK 00:00:12.082 Success: Status code 200 is in the accepted range: 200,404 00:00:12.082 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:18.707 [Pipeline] } 00:00:18.727 [Pipeline] // retry 00:00:18.736 [Pipeline] sh 00:00:19.031 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:19.050 [Pipeline] httpRequest 00:00:19.457 [Pipeline] echo 00:00:19.458 Sorcerer 10.211.164.20 is alive 00:00:19.463 [Pipeline] retry 00:00:19.465 [Pipeline] { 00:00:19.472 [Pipeline] httpRequest 00:00:19.476 HttpMethod: GET 00:00:19.477 URL: http://10.211.164.20/packages/spdk_c25d82eb439cb2d3a69cd1b92f47ccb3bf8c8f01.tar.gz 00:00:19.477 Sending request to url: http://10.211.164.20/packages/spdk_c25d82eb439cb2d3a69cd1b92f47ccb3bf8c8f01.tar.gz 00:00:19.485 Response Code: HTTP/1.1 200 OK 00:00:19.485 Success: Status code 200 is in the accepted range: 200,404 00:00:19.486 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_c25d82eb439cb2d3a69cd1b92f47ccb3bf8c8f01.tar.gz 00:02:42.703 [Pipeline] } 00:02:42.726 [Pipeline] // retry 00:02:42.734 [Pipeline] sh 00:02:43.024 + tar --no-same-owner -xf spdk_c25d82eb439cb2d3a69cd1b92f47ccb3bf8c8f01.tar.gz 00:02:46.345 [Pipeline] sh 00:02:46.634 + git -C spdk log --oneline -n5 00:02:46.634 c25d82eb4 test/common: [TEST] Add __test_mapper stub 00:02:46.634 ff2e6bfe4 lib/lvol: cluster size must be a multiple of bs_dev->blocklen 00:02:46.634 9885e1d29 lib/blob: cluster_sz must be a multiple of PAGE 00:02:46.634 9a6847636 bdev/nvme: Fix spdk_bdev_nvme_create() 00:02:46.634 8bbc7b697 nvmf: Block ctrlr-only admin cmds if NSID is set 00:02:46.647 [Pipeline] } 00:02:46.661 [Pipeline] // stage 00:02:46.670 [Pipeline] stage 00:02:46.672 [Pipeline] { (Prepare) 00:02:46.689 [Pipeline] writeFile 00:02:46.706 [Pipeline] sh 00:02:46.994 + logger -p user.info -t JENKINS-CI 00:02:47.010 [Pipeline] sh 00:02:47.299 + logger -p user.info -t JENKINS-CI 00:02:47.312 [Pipeline] sh 00:02:47.600 + cat autorun-spdk.conf 00:02:47.601 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:47.601 SPDK_TEST_NVMF=1 00:02:47.601 SPDK_TEST_NVME_CLI=1 00:02:47.601 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:47.601 SPDK_TEST_NVMF_NICS=e810 00:02:47.601 SPDK_TEST_VFIOUSER=1 00:02:47.601 SPDK_RUN_UBSAN=1 00:02:47.601 NET_TYPE=phy 00:02:47.609 RUN_NIGHTLY=0 00:02:47.617 [Pipeline] readFile 00:02:47.652 [Pipeline] withEnv 00:02:47.654 [Pipeline] { 00:02:47.669 [Pipeline] sh 00:02:47.964 + set -ex 00:02:47.964 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:02:47.964 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:47.964 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:47.964 ++ SPDK_TEST_NVMF=1 00:02:47.964 ++ SPDK_TEST_NVME_CLI=1 00:02:47.964 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:47.964 ++ SPDK_TEST_NVMF_NICS=e810 
00:02:47.964 ++ SPDK_TEST_VFIOUSER=1 00:02:47.964 ++ SPDK_RUN_UBSAN=1 00:02:47.964 ++ NET_TYPE=phy 00:02:47.964 ++ RUN_NIGHTLY=0 00:02:47.964 + case $SPDK_TEST_NVMF_NICS in 00:02:47.964 + DRIVERS=ice 00:02:47.964 + [[ tcp == \r\d\m\a ]] 00:02:47.964 + [[ -n ice ]] 00:02:47.964 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:47.964 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:47.964 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:02:47.964 rmmod: ERROR: Module irdma is not currently loaded 00:02:47.964 rmmod: ERROR: Module i40iw is not currently loaded 00:02:47.964 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:47.964 + true 00:02:47.964 + for D in $DRIVERS 00:02:47.964 + sudo modprobe ice 00:02:47.964 + exit 0 00:02:47.976 [Pipeline] } 00:02:47.991 [Pipeline] // withEnv 00:02:47.996 [Pipeline] } 00:02:48.010 [Pipeline] // stage 00:02:48.020 [Pipeline] catchError 00:02:48.022 [Pipeline] { 00:02:48.035 [Pipeline] timeout 00:02:48.036 Timeout set to expire in 1 hr 0 min 00:02:48.038 [Pipeline] { 00:02:48.051 [Pipeline] stage 00:02:48.054 [Pipeline] { (Tests) 00:02:48.068 [Pipeline] sh 00:02:48.359 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:48.359 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:48.359 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:48.359 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:48.359 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:48.359 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:48.359 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:48.359 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:48.359 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:48.359 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:48.359 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:02:48.359 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:48.359 + source /etc/os-release 00:02:48.359 ++ NAME='Fedora Linux' 00:02:48.359 ++ VERSION='39 (Cloud Edition)' 00:02:48.359 ++ ID=fedora 00:02:48.359 ++ VERSION_ID=39 00:02:48.359 ++ VERSION_CODENAME= 00:02:48.359 ++ PLATFORM_ID=platform:f39 00:02:48.359 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:48.359 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:48.359 ++ LOGO=fedora-logo-icon 00:02:48.359 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:48.359 ++ HOME_URL=https://fedoraproject.org/ 00:02:48.359 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:48.359 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:48.359 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:48.359 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:48.359 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:48.359 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:48.359 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:48.359 ++ SUPPORT_END=2024-11-12 00:02:48.359 ++ VARIANT='Cloud Edition' 00:02:48.359 ++ VARIANT_ID=cloud 00:02:48.359 + uname -a 00:02:48.359 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:48.359 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:51.672 Hugepages 00:02:51.672 node hugesize free / total 00:02:51.672 node0 1048576kB 0 / 0 00:02:51.672 node0 2048kB 0 / 0 00:02:51.672 node1 1048576kB 0 / 0 00:02:51.672 node1 2048kB 0 / 0 00:02:51.672 00:02:51.672 Type BDF 
Vendor Device NUMA Driver Device Block devices 00:02:51.672 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:02:51.672 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:02:51.672 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:02:51.672 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:02:51.672 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:02:51.672 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:02:51.672 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:02:51.672 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:02:51.672 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:02:51.672 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:02:51.672 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:02:51.672 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:02:51.672 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:02:51.672 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:02:51.672 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:02:51.672 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:02:51.672 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:02:51.672 + rm -f /tmp/spdk-ld-path 00:02:51.672 + source autorun-spdk.conf 00:02:51.672 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:51.672 ++ SPDK_TEST_NVMF=1 00:02:51.672 ++ SPDK_TEST_NVME_CLI=1 00:02:51.672 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:51.672 ++ SPDK_TEST_NVMF_NICS=e810 00:02:51.672 ++ SPDK_TEST_VFIOUSER=1 00:02:51.672 ++ SPDK_RUN_UBSAN=1 00:02:51.672 ++ NET_TYPE=phy 00:02:51.672 ++ RUN_NIGHTLY=0 00:02:51.672 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:51.672 + [[ -n '' ]] 00:02:51.672 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:51.672 + for M in /var/spdk/build-*-manifest.txt 00:02:51.672 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:51.672 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:51.672 + for M in /var/spdk/build-*-manifest.txt 00:02:51.672 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:51.672 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:51.672 + for M in /var/spdk/build-*-manifest.txt 00:02:51.672 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:51.672 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:51.672 ++ uname 00:02:51.672 + [[ Linux == \L\i\n\u\x ]] 00:02:51.672 + sudo dmesg -T 00:02:51.672 + sudo dmesg --clear 00:02:51.672 + dmesg_pid=3552445 00:02:51.672 + [[ Fedora Linux == FreeBSD ]] 00:02:51.672 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:51.673 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:51.673 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:51.673 + sudo dmesg -Tw 00:02:51.673 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:02:51.673 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:02:51.673 + [[ -x /usr/src/fio-static/fio ]] 00:02:51.673 + export FIO_BIN=/usr/src/fio-static/fio 00:02:51.673 + FIO_BIN=/usr/src/fio-static/fio 00:02:51.673 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:51.673 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:51.673 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:51.673 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:51.673 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:51.673 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:51.673 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:51.673 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:51.673 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:51.673 09:35:07 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:51.673 09:35:07 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:51.673 09:35:07 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:51.673 09:35:07 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:51.673 09:35:07 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:02:51.673 09:35:07 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:51.673 09:35:07 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:02:51.673 09:35:07 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:02:51.673 09:35:07 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:02:51.673 09:35:07 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:02:51.673 09:35:07 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:02:51.673 09:35:07 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:51.673 09:35:07 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:51.954 09:35:07 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:51.954 09:35:07 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:51.954 09:35:07 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:51.954 09:35:07 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:51.954 09:35:07 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:51.954 09:35:07 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:51.954 09:35:07 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:51.954 09:35:07 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:51.954 09:35:07 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:51.954 09:35:07 -- paths/export.sh@5 -- $ export PATH 00:02:51.954 09:35:07 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:51.954 09:35:07 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:51.954 09:35:07 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:51.954 09:35:07 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732696507.XXXXXX 00:02:51.954 09:35:07 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732696507.CW3GnJ 00:02:51.954 09:35:07 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:51.954 09:35:07 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:51.954 09:35:07 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:02:51.954 09:35:07 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:51.954 09:35:07 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:51.954 09:35:07 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:51.954 09:35:07 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:51.954 09:35:07 -- common/autotest_common.sh@10 -- $ set +x 00:02:51.954 09:35:07 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:02:51.954 09:35:07 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:51.954 09:35:07 -- pm/common@17 -- $ local monitor 00:02:51.954 09:35:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:51.955 09:35:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:51.955 09:35:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:51.955 09:35:07 -- pm/common@21 -- $ date +%s 00:02:51.955 09:35:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:51.955 09:35:07 -- pm/common@21 -- $ date +%s 00:02:51.955 09:35:07 -- pm/common@25 -- $ sleep 1 00:02:51.955 09:35:07 -- pm/common@21 -- $ date +%s 00:02:51.955 09:35:07 -- pm/common@21 -- $ date +%s 00:02:51.955 09:35:07 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732696507 00:02:51.955 09:35:07 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732696507 00:02:51.955 09:35:07 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732696507 00:02:51.955 09:35:07 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732696507 00:02:51.955 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732696507_collect-vmstat.pm.log 00:02:51.955 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732696507_collect-cpu-load.pm.log 00:02:51.955 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732696507_collect-cpu-temp.pm.log 00:02:51.955 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732696507_collect-bmc-pm.bmc.pm.log 00:02:52.917 09:35:08 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:52.917 09:35:08 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:52.917 09:35:08 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:52.917 09:35:08 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:52.917 09:35:08 -- spdk/autobuild.sh@16 -- $ date -u 00:02:52.917 Wed Nov 27 08:35:08 AM UTC 2024 00:02:52.917 09:35:08 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:52.917 v25.01-pre-237-gc25d82eb4 00:02:52.917 09:35:08 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:52.917 09:35:08 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:52.917 09:35:08 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:52.917 09:35:08 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:52.917 09:35:08 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:52.917 09:35:08 -- common/autotest_common.sh@10 -- $ set +x 00:02:52.917 ************************************ 00:02:52.917 START TEST ubsan 00:02:52.917 ************************************ 00:02:52.917 09:35:08 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:52.917 using ubsan 00:02:52.917 00:02:52.917 real 0m0.001s 00:02:52.917 user 0m0.000s 00:02:52.917 sys 0m0.000s 00:02:52.917 09:35:08 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:52.917 09:35:08 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:52.917 ************************************ 00:02:52.917 END TEST ubsan 00:02:52.917 ************************************ 00:02:52.917 09:35:08 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:52.917 09:35:08 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:52.917 09:35:08 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:52.917 09:35:08 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:52.917 09:35:08 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:52.917 09:35:08 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:52.917 09:35:08 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:52.917 09:35:08 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:52.917 
09:35:08 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:02:53.178 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:53.178 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:53.440 Using 'verbs' RDMA provider 00:03:09.304 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:21.547 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:22.383 Creating mk/config.mk...done. 00:03:22.383 Creating mk/cc.flags.mk...done. 00:03:22.383 Type 'make' to build. 00:03:22.383 09:35:37 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:03:22.383 09:35:37 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:22.383 09:35:37 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:22.383 09:35:37 -- common/autotest_common.sh@10 -- $ set +x 00:03:22.383 ************************************ 00:03:22.383 START TEST make 00:03:22.383 ************************************ 00:03:22.383 09:35:37 make -- common/autotest_common.sh@1129 -- $ make -j144 00:03:22.644 make[1]: Nothing to be done for 'all'. 00:03:24.568 The Meson build system 00:03:24.568 Version: 1.5.0 00:03:24.568 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:24.568 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:24.568 Build type: native build 00:03:24.568 Project name: libvfio-user 00:03:24.568 Project version: 0.0.1 00:03:24.568 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:24.568 C linker for the host machine: cc ld.bfd 2.40-14 00:03:24.568 Host machine cpu family: x86_64 00:03:24.568 Host machine cpu: x86_64 00:03:24.568 Run-time dependency threads found: YES 00:03:24.568 Library dl found: YES 00:03:24.568 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:24.568 Run-time dependency json-c found: YES 0.17 00:03:24.568 Run-time dependency cmocka found: YES 1.1.7 00:03:24.568 Program pytest-3 found: NO 00:03:24.568 Program flake8 found: NO 00:03:24.568 Program misspell-fixer found: NO 00:03:24.568 Program restructuredtext-lint found: NO 00:03:24.568 Program valgrind found: YES (/usr/bin/valgrind) 00:03:24.568 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:24.568 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:24.568 Compiler for C supports arguments -Wwrite-strings: YES 00:03:24.568 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:24.568 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:24.568 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:24.568 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
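# Note: a minimal sketch of the configure invocation the log just executed,
# trimmed to the flags most relevant to this job (the full flag list is printed
# verbatim above; run from the SPDK source root):
#
#   ./configure --enable-debug --enable-werror --enable-ubsan \
#               --with-vfio-user --with-shared --disable-unit-tests
#
# --enable-ubsan matches SPDK_RUN_UBSAN=1 in autorun-spdk.conf, and
# --with-vfio-user is what triggers the libvfio-user Meson sub-build whose
# output continues below.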
00:03:24.568 Build targets in project: 8 00:03:24.568 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:24.568 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:24.568 00:03:24.568 libvfio-user 0.0.1 00:03:24.568 00:03:24.568 User defined options 00:03:24.568 buildtype : debug 00:03:24.568 default_library: shared 00:03:24.568 libdir : /usr/local/lib 00:03:24.568 00:03:24.568 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:24.568 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:24.830 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:24.830 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:24.830 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:24.830 [4/37] Compiling C object samples/null.p/null.c.o 00:03:24.830 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:24.830 [6/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:24.830 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:24.830 [8/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:24.830 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:24.830 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:24.830 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:24.830 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:24.830 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:24.830 [14/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:24.830 [15/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:24.830 [16/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:24.830 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:24.830 [18/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:24.830 [19/37] Compiling C object samples/server.p/server.c.o 00:03:24.830 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:24.830 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:24.830 [22/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:24.830 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:24.830 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:24.830 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:24.830 [26/37] Compiling C object samples/client.p/client.c.o 00:03:24.830 [27/37] Linking target samples/client 00:03:24.830 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:24.830 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:25.092 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:03:25.092 [31/37] Linking target test/unit_tests 00:03:25.092 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:25.092 [33/37] Linking target samples/gpio-pci-idio-16 00:03:25.092 [34/37] Linking target samples/shadow_ioeventfd_server 00:03:25.092 [35/37] Linking target samples/server 00:03:25.092 [36/37] Linking target samples/null 00:03:25.092 [37/37] Linking target samples/lspci 00:03:25.092 INFO: autodetecting backend as ninja 00:03:25.092 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
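# Note: the DESTDIR + "meson install" line that follows stages the freshly
# built libvfio-user into SPDK's build tree rather than into the system
# prefix. A minimal standalone sketch of the same configure/build/stage cycle
# (generic paths; assumes meson and ninja are installed):
#
#   meson setup build-debug libvfio-user --buildtype=debug -Ddefault_library=shared
#   ninja -C build-debug
#   DESTDIR=$PWD/staging meson install --quiet -C build-debug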
00:03:25.354 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:25.615 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:25.615 ninja: no work to do. 00:03:32.203 The Meson build system 00:03:32.203 Version: 1.5.0 00:03:32.203 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:03:32.203 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:03:32.203 Build type: native build 00:03:32.203 Program cat found: YES (/usr/bin/cat) 00:03:32.203 Project name: DPDK 00:03:32.203 Project version: 24.03.0 00:03:32.203 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:32.203 C linker for the host machine: cc ld.bfd 2.40-14 00:03:32.203 Host machine cpu family: x86_64 00:03:32.203 Host machine cpu: x86_64 00:03:32.203 Message: ## Building in Developer Mode ## 00:03:32.203 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:32.203 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:03:32.203 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:32.203 Program python3 found: YES (/usr/bin/python3) 00:03:32.203 Program cat found: YES (/usr/bin/cat) 00:03:32.203 Compiler for C supports arguments -march=native: YES 00:03:32.203 Checking for size of "void *" : 8 00:03:32.203 Checking for size of "void *" : 8 (cached) 00:03:32.203 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:32.203 Library m found: YES 00:03:32.203 Library numa found: YES 00:03:32.203 Has header "numaif.h" : YES 00:03:32.203 Library fdt found: NO 00:03:32.203 Library execinfo found: NO 00:03:32.203 Has header "execinfo.h" : YES 00:03:32.203 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:32.203 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:32.203 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:32.203 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:32.203 Run-time dependency openssl found: YES 3.1.1 00:03:32.203 Run-time dependency libpcap found: YES 1.10.4 00:03:32.203 Has header "pcap.h" with dependency libpcap: YES 00:03:32.203 Compiler for C supports arguments -Wcast-qual: YES 00:03:32.203 Compiler for C supports arguments -Wdeprecated: YES 00:03:32.203 Compiler for C supports arguments -Wformat: YES 00:03:32.203 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:32.203 Compiler for C supports arguments -Wformat-security: NO 00:03:32.203 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:32.203 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:32.203 Compiler for C supports arguments -Wnested-externs: YES 00:03:32.203 Compiler for C supports arguments -Wold-style-definition: YES 00:03:32.203 Compiler for C supports arguments -Wpointer-arith: YES 00:03:32.203 Compiler for C supports arguments -Wsign-compare: YES 00:03:32.203 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:32.203 Compiler for C supports arguments -Wundef: YES 00:03:32.203 Compiler for C supports arguments -Wwrite-strings: YES 00:03:32.203 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:32.203 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:03:32.203 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:32.203 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:32.203 Program objdump found: YES (/usr/bin/objdump) 00:03:32.203 Compiler for C supports arguments -mavx512f: YES 00:03:32.203 Checking if "AVX512 checking" compiles: YES 00:03:32.203 Fetching value of define "__SSE4_2__" : 1 00:03:32.203 Fetching value of define "__AES__" : 1 00:03:32.203 Fetching value of define "__AVX__" : 1 00:03:32.203 Fetching value of define "__AVX2__" : 1 00:03:32.203 Fetching value of define "__AVX512BW__" : 1 00:03:32.203 Fetching value of define "__AVX512CD__" : 1 00:03:32.203 Fetching value of define "__AVX512DQ__" : 1 00:03:32.203 Fetching value of define "__AVX512F__" : 1 00:03:32.203 Fetching value of define "__AVX512VL__" : 1 00:03:32.203 Fetching value of define "__PCLMUL__" : 1 00:03:32.203 Fetching value of define "__RDRND__" : 1 00:03:32.203 Fetching value of define "__RDSEED__" : 1 00:03:32.203 Fetching value of define "__VPCLMULQDQ__" : 1 00:03:32.203 Fetching value of define "__znver1__" : (undefined) 00:03:32.203 Fetching value of define "__znver2__" : (undefined) 00:03:32.203 Fetching value of define "__znver3__" : (undefined) 00:03:32.203 Fetching value of define "__znver4__" : (undefined) 00:03:32.203 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:32.203 Message: lib/log: Defining dependency "log" 00:03:32.203 Message: lib/kvargs: Defining dependency "kvargs" 00:03:32.203 Message: lib/telemetry: Defining dependency "telemetry" 00:03:32.203 Checking for function "getentropy" : NO 00:03:32.203 Message: lib/eal: Defining dependency "eal" 00:03:32.203 Message: lib/ring: Defining dependency "ring" 00:03:32.203 Message: lib/rcu: Defining dependency "rcu" 00:03:32.203 Message: lib/mempool: Defining dependency "mempool" 00:03:32.203 Message: lib/mbuf: Defining dependency "mbuf" 00:03:32.203 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:32.203 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:32.203 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:32.203 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:32.203 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:32.203 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:03:32.203 Compiler for C supports arguments -mpclmul: YES 00:03:32.203 Compiler for C supports arguments -maes: YES 00:03:32.203 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:32.203 Compiler for C supports arguments -mavx512bw: YES 00:03:32.203 Compiler for C supports arguments -mavx512dq: YES 00:03:32.203 Compiler for C supports arguments -mavx512vl: YES 00:03:32.203 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:32.203 Compiler for C supports arguments -mavx2: YES 00:03:32.203 Compiler for C supports arguments -mavx: YES 00:03:32.203 Message: lib/net: Defining dependency "net" 00:03:32.203 Message: lib/meter: Defining dependency "meter" 00:03:32.203 Message: lib/ethdev: Defining dependency "ethdev" 00:03:32.203 Message: lib/pci: Defining dependency "pci" 00:03:32.203 Message: lib/cmdline: Defining dependency "cmdline" 00:03:32.203 Message: lib/hash: Defining dependency "hash" 00:03:32.203 Message: lib/timer: Defining dependency "timer" 00:03:32.203 Message: lib/compressdev: Defining dependency "compressdev" 00:03:32.203 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:32.203 Message: lib/dmadev: Defining dependency "dmadev" 
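# Note: the 'Fetching value of define "__AVX512F__"' style lines above are
# Meson asking the compiler which instruction-set macros it predefines; with
# gcc (the compiler in this log) the same set can be dumped directly, e.g.:
#
#   echo | gcc -march=native -dM -E - | grep -E '__(AVX512[A-Z]+|PCLMUL|RDRND|RDSEED|VPCLMULQDQ)__'
#
# Together with the -mavx512f/-mavx512bw argument checks above, DPDK uses
# these results to decide which vectorized objects (e.g. net_crc_avx512) to
# compile later in this build.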
00:03:32.203 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:32.203 Message: lib/power: Defining dependency "power" 00:03:32.203 Message: lib/reorder: Defining dependency "reorder" 00:03:32.203 Message: lib/security: Defining dependency "security" 00:03:32.203 Has header "linux/userfaultfd.h" : YES 00:03:32.203 Has header "linux/vduse.h" : YES 00:03:32.203 Message: lib/vhost: Defining dependency "vhost" 00:03:32.203 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:32.203 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:32.203 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:32.203 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:32.203 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:32.203 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:32.203 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:32.203 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:32.203 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:32.203 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:32.203 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:32.203 Configuring doxy-api-html.conf using configuration 00:03:32.203 Configuring doxy-api-man.conf using configuration 00:03:32.203 Program mandb found: YES (/usr/bin/mandb) 00:03:32.203 Program sphinx-build found: NO 00:03:32.203 Configuring rte_build_config.h using configuration 00:03:32.203 Message: 00:03:32.203 ================= 00:03:32.203 Applications Enabled 00:03:32.203 ================= 00:03:32.203 00:03:32.203 apps: 00:03:32.203 00:03:32.203 00:03:32.203 Message: 00:03:32.203 ================= 00:03:32.203 Libraries Enabled 00:03:32.203 ================= 00:03:32.204 00:03:32.204 libs: 00:03:32.204 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:32.204 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:32.204 cryptodev, dmadev, power, reorder, security, vhost, 00:03:32.204 00:03:32.204 Message: 00:03:32.204 =============== 00:03:32.204 Drivers Enabled 00:03:32.204 =============== 00:03:32.204 00:03:32.204 common: 00:03:32.204 00:03:32.204 bus: 00:03:32.204 pci, vdev, 00:03:32.204 mempool: 00:03:32.204 ring, 00:03:32.204 dma: 00:03:32.204 00:03:32.204 net: 00:03:32.204 00:03:32.204 crypto: 00:03:32.204 00:03:32.204 compress: 00:03:32.204 00:03:32.204 vdpa: 00:03:32.204 00:03:32.204 00:03:32.204 Message: 00:03:32.204 ================= 00:03:32.204 Content Skipped 00:03:32.204 ================= 00:03:32.204 00:03:32.204 apps: 00:03:32.204 dumpcap: explicitly disabled via build config 00:03:32.204 graph: explicitly disabled via build config 00:03:32.204 pdump: explicitly disabled via build config 00:03:32.204 proc-info: explicitly disabled via build config 00:03:32.204 test-acl: explicitly disabled via build config 00:03:32.204 test-bbdev: explicitly disabled via build config 00:03:32.204 test-cmdline: explicitly disabled via build config 00:03:32.204 test-compress-perf: explicitly disabled via build config 00:03:32.204 test-crypto-perf: explicitly disabled via build config 00:03:32.204 test-dma-perf: explicitly disabled via build config 00:03:32.204 test-eventdev: explicitly disabled via build config 00:03:32.204 test-fib: explicitly disabled via build config 00:03:32.204 test-flow-perf: explicitly disabled via build config 00:03:32.204 test-gpudev: explicitly disabled 
via build config 00:03:32.204 test-mldev: explicitly disabled via build config 00:03:32.204 test-pipeline: explicitly disabled via build config 00:03:32.204 test-pmd: explicitly disabled via build config 00:03:32.204 test-regex: explicitly disabled via build config 00:03:32.204 test-sad: explicitly disabled via build config 00:03:32.204 test-security-perf: explicitly disabled via build config 00:03:32.204 00:03:32.204 libs: 00:03:32.204 argparse: explicitly disabled via build config 00:03:32.204 metrics: explicitly disabled via build config 00:03:32.204 acl: explicitly disabled via build config 00:03:32.204 bbdev: explicitly disabled via build config 00:03:32.204 bitratestats: explicitly disabled via build config 00:03:32.204 bpf: explicitly disabled via build config 00:03:32.204 cfgfile: explicitly disabled via build config 00:03:32.204 distributor: explicitly disabled via build config 00:03:32.204 efd: explicitly disabled via build config 00:03:32.204 eventdev: explicitly disabled via build config 00:03:32.204 dispatcher: explicitly disabled via build config 00:03:32.204 gpudev: explicitly disabled via build config 00:03:32.204 gro: explicitly disabled via build config 00:03:32.204 gso: explicitly disabled via build config 00:03:32.204 ip_frag: explicitly disabled via build config 00:03:32.204 jobstats: explicitly disabled via build config 00:03:32.204 latencystats: explicitly disabled via build config 00:03:32.204 lpm: explicitly disabled via build config 00:03:32.204 member: explicitly disabled via build config 00:03:32.204 pcapng: explicitly disabled via build config 00:03:32.204 rawdev: explicitly disabled via build config 00:03:32.204 regexdev: explicitly disabled via build config 00:03:32.204 mldev: explicitly disabled via build config 00:03:32.204 rib: explicitly disabled via build config 00:03:32.204 sched: explicitly disabled via build config 00:03:32.204 stack: explicitly disabled via build config 00:03:32.204 ipsec: explicitly disabled via build config 00:03:32.204 pdcp: explicitly disabled via build config 00:03:32.204 fib: explicitly disabled via build config 00:03:32.204 port: explicitly disabled via build config 00:03:32.204 pdump: explicitly disabled via build config 00:03:32.204 table: explicitly disabled via build config 00:03:32.204 pipeline: explicitly disabled via build config 00:03:32.204 graph: explicitly disabled via build config 00:03:32.204 node: explicitly disabled via build config 00:03:32.204 00:03:32.204 drivers: 00:03:32.204 common/cpt: not in enabled drivers build config 00:03:32.204 common/dpaax: not in enabled drivers build config 00:03:32.204 common/iavf: not in enabled drivers build config 00:03:32.204 common/idpf: not in enabled drivers build config 00:03:32.204 common/ionic: not in enabled drivers build config 00:03:32.204 common/mvep: not in enabled drivers build config 00:03:32.204 common/octeontx: not in enabled drivers build config 00:03:32.204 bus/auxiliary: not in enabled drivers build config 00:03:32.204 bus/cdx: not in enabled drivers build config 00:03:32.204 bus/dpaa: not in enabled drivers build config 00:03:32.204 bus/fslmc: not in enabled drivers build config 00:03:32.204 bus/ifpga: not in enabled drivers build config 00:03:32.204 bus/platform: not in enabled drivers build config 00:03:32.204 bus/uacce: not in enabled drivers build config 00:03:32.204 bus/vmbus: not in enabled drivers build config 00:03:32.204 common/cnxk: not in enabled drivers build config 00:03:32.204 common/mlx5: not in enabled drivers build config 00:03:32.204 
common/nfp: not in enabled drivers build config 00:03:32.204 common/nitrox: not in enabled drivers build config 00:03:32.204 common/qat: not in enabled drivers build config 00:03:32.204 common/sfc_efx: not in enabled drivers build config 00:03:32.204 mempool/bucket: not in enabled drivers build config 00:03:32.204 mempool/cnxk: not in enabled drivers build config 00:03:32.204 mempool/dpaa: not in enabled drivers build config 00:03:32.204 mempool/dpaa2: not in enabled drivers build config 00:03:32.204 mempool/octeontx: not in enabled drivers build config 00:03:32.204 mempool/stack: not in enabled drivers build config 00:03:32.204 dma/cnxk: not in enabled drivers build config 00:03:32.204 dma/dpaa: not in enabled drivers build config 00:03:32.204 dma/dpaa2: not in enabled drivers build config 00:03:32.204 dma/hisilicon: not in enabled drivers build config 00:03:32.204 dma/idxd: not in enabled drivers build config 00:03:32.204 dma/ioat: not in enabled drivers build config 00:03:32.204 dma/skeleton: not in enabled drivers build config 00:03:32.204 net/af_packet: not in enabled drivers build config 00:03:32.204 net/af_xdp: not in enabled drivers build config 00:03:32.204 net/ark: not in enabled drivers build config 00:03:32.204 net/atlantic: not in enabled drivers build config 00:03:32.204 net/avp: not in enabled drivers build config 00:03:32.204 net/axgbe: not in enabled drivers build config 00:03:32.204 net/bnx2x: not in enabled drivers build config 00:03:32.204 net/bnxt: not in enabled drivers build config 00:03:32.204 net/bonding: not in enabled drivers build config 00:03:32.204 net/cnxk: not in enabled drivers build config 00:03:32.204 net/cpfl: not in enabled drivers build config 00:03:32.204 net/cxgbe: not in enabled drivers build config 00:03:32.204 net/dpaa: not in enabled drivers build config 00:03:32.204 net/dpaa2: not in enabled drivers build config 00:03:32.204 net/e1000: not in enabled drivers build config 00:03:32.204 net/ena: not in enabled drivers build config 00:03:32.204 net/enetc: not in enabled drivers build config 00:03:32.204 net/enetfec: not in enabled drivers build config 00:03:32.204 net/enic: not in enabled drivers build config 00:03:32.204 net/failsafe: not in enabled drivers build config 00:03:32.204 net/fm10k: not in enabled drivers build config 00:03:32.204 net/gve: not in enabled drivers build config 00:03:32.204 net/hinic: not in enabled drivers build config 00:03:32.204 net/hns3: not in enabled drivers build config 00:03:32.204 net/i40e: not in enabled drivers build config 00:03:32.204 net/iavf: not in enabled drivers build config 00:03:32.204 net/ice: not in enabled drivers build config 00:03:32.204 net/idpf: not in enabled drivers build config 00:03:32.204 net/igc: not in enabled drivers build config 00:03:32.204 net/ionic: not in enabled drivers build config 00:03:32.204 net/ipn3ke: not in enabled drivers build config 00:03:32.204 net/ixgbe: not in enabled drivers build config 00:03:32.204 net/mana: not in enabled drivers build config 00:03:32.204 net/memif: not in enabled drivers build config 00:03:32.204 net/mlx4: not in enabled drivers build config 00:03:32.204 net/mlx5: not in enabled drivers build config 00:03:32.204 net/mvneta: not in enabled drivers build config 00:03:32.204 net/mvpp2: not in enabled drivers build config 00:03:32.204 net/netvsc: not in enabled drivers build config 00:03:32.204 net/nfb: not in enabled drivers build config 00:03:32.204 net/nfp: not in enabled drivers build config 00:03:32.204 net/ngbe: not in enabled drivers build 
config 00:03:32.204 net/null: not in enabled drivers build config 00:03:32.204 net/octeontx: not in enabled drivers build config 00:03:32.204 net/octeon_ep: not in enabled drivers build config 00:03:32.204 net/pcap: not in enabled drivers build config 00:03:32.204 net/pfe: not in enabled drivers build config 00:03:32.204 net/qede: not in enabled drivers build config 00:03:32.204 net/ring: not in enabled drivers build config 00:03:32.204 net/sfc: not in enabled drivers build config 00:03:32.204 net/softnic: not in enabled drivers build config 00:03:32.204 net/tap: not in enabled drivers build config 00:03:32.204 net/thunderx: not in enabled drivers build config 00:03:32.204 net/txgbe: not in enabled drivers build config 00:03:32.204 net/vdev_netvsc: not in enabled drivers build config 00:03:32.204 net/vhost: not in enabled drivers build config 00:03:32.204 net/virtio: not in enabled drivers build config 00:03:32.204 net/vmxnet3: not in enabled drivers build config 00:03:32.204 raw/*: missing internal dependency, "rawdev" 00:03:32.204 crypto/armv8: not in enabled drivers build config 00:03:32.204 crypto/bcmfs: not in enabled drivers build config 00:03:32.204 crypto/caam_jr: not in enabled drivers build config 00:03:32.204 crypto/ccp: not in enabled drivers build config 00:03:32.204 crypto/cnxk: not in enabled drivers build config 00:03:32.204 crypto/dpaa_sec: not in enabled drivers build config 00:03:32.204 crypto/dpaa2_sec: not in enabled drivers build config 00:03:32.204 crypto/ipsec_mb: not in enabled drivers build config 00:03:32.204 crypto/mlx5: not in enabled drivers build config 00:03:32.204 crypto/mvsam: not in enabled drivers build config 00:03:32.204 crypto/nitrox: not in enabled drivers build config 00:03:32.204 crypto/null: not in enabled drivers build config 00:03:32.204 crypto/octeontx: not in enabled drivers build config 00:03:32.204 crypto/openssl: not in enabled drivers build config 00:03:32.204 crypto/scheduler: not in enabled drivers build config 00:03:32.204 crypto/uadk: not in enabled drivers build config 00:03:32.204 crypto/virtio: not in enabled drivers build config 00:03:32.204 compress/isal: not in enabled drivers build config 00:03:32.204 compress/mlx5: not in enabled drivers build config 00:03:32.204 compress/nitrox: not in enabled drivers build config 00:03:32.204 compress/octeontx: not in enabled drivers build config 00:03:32.204 compress/zlib: not in enabled drivers build config 00:03:32.204 regex/*: missing internal dependency, "regexdev" 00:03:32.204 ml/*: missing internal dependency, "mldev" 00:03:32.204 vdpa/ifc: not in enabled drivers build config 00:03:32.204 vdpa/mlx5: not in enabled drivers build config 00:03:32.204 vdpa/nfp: not in enabled drivers build config 00:03:32.204 vdpa/sfc: not in enabled drivers build config 00:03:32.204 event/*: missing internal dependency, "eventdev" 00:03:32.204 baseband/*: missing internal dependency, "bbdev" 00:03:32.204 gpu/*: missing internal dependency, "gpudev" 00:03:32.204 00:03:32.204 00:03:32.204 Build targets in project: 84 00:03:32.204 00:03:32.204 DPDK 24.03.0 00:03:32.204 00:03:32.204 User defined options 00:03:32.204 buildtype : debug 00:03:32.204 default_library : shared 00:03:32.204 libdir : lib 00:03:32.204 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:32.204 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:32.204 c_link_args : 00:03:32.204 cpu_instruction_set: native 00:03:32.204 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:03:32.204 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:03:32.204 enable_docs : false 00:03:32.204 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:32.204 enable_kmods : false 00:03:32.204 max_lcores : 128 00:03:32.204 tests : false 00:03:32.204 00:03:32.204 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:32.204 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:03:32.204 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:32.204 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:32.204 [3/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:32.204 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:32.204 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:32.204 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:32.204 [7/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:32.204 [8/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:32.204 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:32.204 [10/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:32.204 [11/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:32.204 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:32.204 [13/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:32.204 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:32.204 [15/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:32.204 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:32.204 [17/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:32.204 [18/267] Linking static target lib/librte_kvargs.a 00:03:32.204 [19/267] Linking static target lib/librte_log.a 00:03:32.204 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:32.204 [21/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:32.204 [22/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:32.204 [23/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:32.204 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:32.204 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:32.204 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:32.204 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:32.204 [28/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:32.204 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:32.204 [30/267] Linking static target 
lib/librte_pci.a 00:03:32.204 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:32.204 [32/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:32.462 [33/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:32.462 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:32.462 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:32.462 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:32.462 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:32.462 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:32.462 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:32.462 [40/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:32.462 [41/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.462 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:32.462 [43/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:32.462 [44/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.721 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:32.721 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:32.721 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:32.721 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:32.721 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:32.721 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:32.721 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:32.721 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:32.721 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:32.721 [54/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:32.721 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:32.721 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:32.721 [57/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:32.721 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:32.721 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:32.721 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:32.721 [61/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:32.721 [62/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:32.721 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:32.721 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:32.721 [65/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:32.721 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:32.721 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:32.721 [68/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:32.721 [69/267] Compiling C object 
lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:32.721 [70/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:32.721 [71/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:32.721 [72/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:32.721 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:32.721 [74/267] Linking static target lib/librte_meter.a 00:03:32.721 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:32.721 [76/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:32.721 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:32.721 [78/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:32.721 [79/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:32.721 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:32.721 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:32.722 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:32.722 [83/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:32.722 [84/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:32.722 [85/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:32.722 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:32.722 [87/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:32.722 [88/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:32.722 [89/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:32.722 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:32.722 [91/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:32.722 [92/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:32.722 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:32.722 [94/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:32.722 [95/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:32.722 [96/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:32.722 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:32.722 [98/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:32.722 [99/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:32.722 [100/267] Linking static target lib/librte_telemetry.a 00:03:32.722 [101/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:32.722 [102/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:32.722 [103/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:32.722 [104/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:32.722 [105/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:32.722 [106/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:32.722 [107/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:32.722 [108/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:32.722 [109/267] Linking static target 
lib/librte_ring.a 00:03:32.722 [110/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:32.722 [111/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:32.722 [112/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:32.722 [113/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:32.722 [114/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:32.722 [115/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:32.722 [116/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:32.722 [117/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:32.722 [118/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:32.722 [119/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:32.722 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:32.722 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:32.722 [122/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:32.722 [123/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:32.722 [124/267] Linking static target lib/librte_timer.a 00:03:32.722 [125/267] Linking static target lib/librte_cmdline.a 00:03:32.722 [126/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:32.722 [127/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:32.722 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:32.722 [129/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:32.722 [130/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:32.722 [131/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:32.722 [132/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:32.722 [133/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:32.722 [134/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.722 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:32.722 [136/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:32.722 [137/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:32.722 [138/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:32.722 [139/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:32.722 [140/267] Linking static target lib/librte_dmadev.a 00:03:32.722 [141/267] Linking static target lib/librte_net.a 00:03:32.722 [142/267] Linking static target lib/librte_mempool.a 00:03:32.722 [143/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:32.722 [144/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:32.722 [145/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:32.722 [146/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:32.722 [147/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:32.722 [148/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:32.722 [149/267] Linking static target lib/librte_rcu.a 00:03:32.722 [150/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:32.722 
[151/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:32.722 [152/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:32.722 [153/267] Linking static target lib/librte_compressdev.a 00:03:32.722 [154/267] Linking target lib/librte_log.so.24.1 00:03:32.983 [155/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:32.983 [156/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:32.983 [157/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:32.983 [158/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:32.983 [159/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:32.983 [160/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:32.983 [161/267] Linking static target lib/librte_power.a 00:03:32.983 [162/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:32.983 [163/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:32.983 [164/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:32.983 [165/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:32.983 [166/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:32.983 [167/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:32.983 [168/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:32.983 [169/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:32.983 [170/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:32.983 [171/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:32.983 [172/267] Linking static target lib/librte_reorder.a 00:03:32.983 [173/267] Linking static target lib/librte_security.a 00:03:32.983 [174/267] Linking static target lib/librte_eal.a 00:03:32.983 [175/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:32.983 [176/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:32.983 [177/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:32.983 [178/267] Linking static target lib/librte_mbuf.a 00:03:32.983 [179/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:32.983 [180/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.983 [181/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:32.983 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:32.983 [183/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:32.983 [184/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:32.983 [185/267] Linking target lib/librte_kvargs.so.24.1 00:03:32.983 [186/267] Linking static target drivers/librte_bus_vdev.a 00:03:32.983 [187/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:32.983 [188/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:32.983 [189/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:32.983 [190/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.983 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:32.983 [192/267] Generating drivers/rte_bus_pci.pmd.c with a custom 
command 00:03:33.244 [193/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:33.244 [194/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:33.244 [195/267] Linking static target drivers/librte_bus_pci.a 00:03:33.244 [196/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.244 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:33.244 [198/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:33.244 [199/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:33.244 [200/267] Linking static target lib/librte_hash.a 00:03:33.244 [201/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.244 [202/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:33.244 [203/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.244 [204/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:33.244 [205/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:33.244 [206/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:33.244 [207/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:33.244 [208/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.244 [209/267] Linking static target drivers/librte_mempool_ring.a 00:03:33.244 [210/267] Linking static target lib/librte_cryptodev.a 00:03:33.504 [211/267] Linking target lib/librte_telemetry.so.24.1 00:03:33.504 [212/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.504 [213/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.504 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:33.504 [215/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.504 [216/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.765 [217/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.765 [218/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:33.765 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:33.765 [220/267] Linking static target lib/librte_ethdev.a 00:03:33.765 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.024 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.025 [223/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.025 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.025 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.284 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.853 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:34.853 [228/267] Linking static target lib/librte_vhost.a 
00:03:35.797 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.181 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.762 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.705 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.705 [233/267] Linking target lib/librte_eal.so.24.1 00:03:44.705 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:44.705 [235/267] Linking target lib/librte_ring.so.24.1 00:03:44.705 [236/267] Linking target lib/librte_meter.so.24.1 00:03:44.705 [237/267] Linking target lib/librte_pci.so.24.1 00:03:44.705 [238/267] Linking target lib/librte_timer.so.24.1 00:03:44.705 [239/267] Linking target lib/librte_dmadev.so.24.1 00:03:44.705 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:44.966 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:44.966 [242/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:44.966 [243/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:44.966 [244/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:44.967 [245/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:44.967 [246/267] Linking target lib/librte_rcu.so.24.1 00:03:44.967 [247/267] Linking target lib/librte_mempool.so.24.1 00:03:44.967 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:44.967 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:44.967 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:45.227 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:45.227 [252/267] Linking target lib/librte_mbuf.so.24.1 00:03:45.227 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:45.227 [254/267] Linking target lib/librte_compressdev.so.24.1 00:03:45.227 [255/267] Linking target lib/librte_reorder.so.24.1 00:03:45.227 [256/267] Linking target lib/librte_net.so.24.1 00:03:45.227 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:03:45.488 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:45.488 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:45.488 [260/267] Linking target lib/librte_hash.so.24.1 00:03:45.488 [261/267] Linking target lib/librte_cmdline.so.24.1 00:03:45.488 [262/267] Linking target lib/librte_security.so.24.1 00:03:45.488 [263/267] Linking target lib/librte_ethdev.so.24.1 00:03:45.488 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:45.488 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:45.748 [266/267] Linking target lib/librte_power.so.24.1 00:03:45.748 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:45.748 INFO: autodetecting backend as ninja 00:03:45.748 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:03:48.315 CC lib/ut/ut.o 00:03:48.315 CC lib/ut_mock/mock.o 00:03:48.315 CC lib/log/log.o 00:03:48.315 CC lib/log/log_flags.o 00:03:48.315 CC 
lib/log/log_deprecated.o 00:03:48.315 LIB libspdk_ut.a 00:03:48.315 LIB libspdk_ut_mock.a 00:03:48.315 LIB libspdk_log.a 00:03:48.315 SO libspdk_ut.so.2.0 00:03:48.315 SO libspdk_ut_mock.so.6.0 00:03:48.315 SO libspdk_log.so.7.1 00:03:48.315 SYMLINK libspdk_ut.so 00:03:48.315 SYMLINK libspdk_ut_mock.so 00:03:48.315 SYMLINK libspdk_log.so 00:03:48.575 CXX lib/trace_parser/trace.o 00:03:48.575 CC lib/util/base64.o 00:03:48.575 CC lib/util/bit_array.o 00:03:48.575 CC lib/util/cpuset.o 00:03:48.575 CC lib/util/crc16.o 00:03:48.575 CC lib/ioat/ioat.o 00:03:48.575 CC lib/util/crc32.o 00:03:48.575 CC lib/dma/dma.o 00:03:48.575 CC lib/util/crc32c.o 00:03:48.575 CC lib/util/crc32_ieee.o 00:03:48.575 CC lib/util/crc64.o 00:03:48.575 CC lib/util/dif.o 00:03:48.575 CC lib/util/fd.o 00:03:48.575 CC lib/util/fd_group.o 00:03:48.575 CC lib/util/file.o 00:03:48.575 CC lib/util/hexlify.o 00:03:48.575 CC lib/util/iov.o 00:03:48.575 CC lib/util/math.o 00:03:48.575 CC lib/util/net.o 00:03:48.575 CC lib/util/pipe.o 00:03:48.575 CC lib/util/strerror_tls.o 00:03:48.575 CC lib/util/string.o 00:03:48.575 CC lib/util/uuid.o 00:03:48.575 CC lib/util/xor.o 00:03:48.575 CC lib/util/zipf.o 00:03:48.575 CC lib/util/md5.o 00:03:48.837 LIB libspdk_dma.a 00:03:48.837 CC lib/vfio_user/host/vfio_user_pci.o 00:03:48.837 CC lib/vfio_user/host/vfio_user.o 00:03:48.837 SO libspdk_dma.so.5.0 00:03:48.837 SYMLINK libspdk_dma.so 00:03:48.837 LIB libspdk_ioat.a 00:03:48.837 SO libspdk_ioat.so.7.0 00:03:48.837 SYMLINK libspdk_ioat.so 00:03:49.098 LIB libspdk_vfio_user.a 00:03:49.098 SO libspdk_vfio_user.so.5.0 00:03:49.098 LIB libspdk_util.a 00:03:49.098 SYMLINK libspdk_vfio_user.so 00:03:49.098 SO libspdk_util.so.10.1 00:03:49.358 SYMLINK libspdk_util.so 00:03:49.358 LIB libspdk_trace_parser.a 00:03:49.358 SO libspdk_trace_parser.so.6.0 00:03:49.618 SYMLINK libspdk_trace_parser.so 00:03:49.618 CC lib/json/json_parse.o 00:03:49.618 CC lib/json/json_util.o 00:03:49.618 CC lib/json/json_write.o 00:03:49.618 CC lib/conf/conf.o 00:03:49.618 CC lib/vmd/vmd.o 00:03:49.618 CC lib/idxd/idxd.o 00:03:49.618 CC lib/rdma_utils/rdma_utils.o 00:03:49.618 CC lib/vmd/led.o 00:03:49.618 CC lib/idxd/idxd_user.o 00:03:49.618 CC lib/idxd/idxd_kernel.o 00:03:49.618 CC lib/env_dpdk/env.o 00:03:49.618 CC lib/env_dpdk/memory.o 00:03:49.618 CC lib/env_dpdk/pci.o 00:03:49.618 CC lib/env_dpdk/init.o 00:03:49.618 CC lib/env_dpdk/threads.o 00:03:49.618 CC lib/env_dpdk/pci_ioat.o 00:03:49.618 CC lib/env_dpdk/pci_virtio.o 00:03:49.618 CC lib/env_dpdk/pci_vmd.o 00:03:49.618 CC lib/env_dpdk/pci_idxd.o 00:03:49.618 CC lib/env_dpdk/pci_event.o 00:03:49.618 CC lib/env_dpdk/sigbus_handler.o 00:03:49.618 CC lib/env_dpdk/pci_dpdk.o 00:03:49.618 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:49.618 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:49.878 LIB libspdk_conf.a 00:03:49.879 SO libspdk_conf.so.6.0 00:03:49.879 LIB libspdk_json.a 00:03:49.879 LIB libspdk_rdma_utils.a 00:03:50.140 SO libspdk_rdma_utils.so.1.0 00:03:50.140 SO libspdk_json.so.6.0 00:03:50.140 SYMLINK libspdk_conf.so 00:03:50.140 SYMLINK libspdk_rdma_utils.so 00:03:50.140 SYMLINK libspdk_json.so 00:03:50.140 LIB libspdk_idxd.a 00:03:50.402 SO libspdk_idxd.so.12.1 00:03:50.402 LIB libspdk_vmd.a 00:03:50.402 SO libspdk_vmd.so.6.0 00:03:50.402 SYMLINK libspdk_idxd.so 00:03:50.402 SYMLINK libspdk_vmd.so 00:03:50.402 CC lib/rdma_provider/common.o 00:03:50.402 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:50.402 CC lib/jsonrpc/jsonrpc_server.o 00:03:50.402 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:50.402 CC 
lib/jsonrpc/jsonrpc_client.o 00:03:50.402 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:50.664 LIB libspdk_rdma_provider.a 00:03:50.664 LIB libspdk_jsonrpc.a 00:03:50.665 SO libspdk_rdma_provider.so.7.0 00:03:50.665 SO libspdk_jsonrpc.so.6.0 00:03:50.926 SYMLINK libspdk_rdma_provider.so 00:03:50.926 SYMLINK libspdk_jsonrpc.so 00:03:50.926 LIB libspdk_env_dpdk.a 00:03:50.926 SO libspdk_env_dpdk.so.15.1 00:03:51.188 SYMLINK libspdk_env_dpdk.so 00:03:51.188 CC lib/rpc/rpc.o 00:03:51.450 LIB libspdk_rpc.a 00:03:51.450 SO libspdk_rpc.so.6.0 00:03:51.450 SYMLINK libspdk_rpc.so 00:03:52.022 CC lib/keyring/keyring.o 00:03:52.022 CC lib/keyring/keyring_rpc.o 00:03:52.022 CC lib/trace/trace.o 00:03:52.022 CC lib/notify/notify.o 00:03:52.022 CC lib/trace/trace_flags.o 00:03:52.022 CC lib/notify/notify_rpc.o 00:03:52.022 CC lib/trace/trace_rpc.o 00:03:52.022 LIB libspdk_notify.a 00:03:52.022 SO libspdk_notify.so.6.0 00:03:52.022 LIB libspdk_keyring.a 00:03:52.022 LIB libspdk_trace.a 00:03:52.285 SO libspdk_keyring.so.2.0 00:03:52.285 SO libspdk_trace.so.11.0 00:03:52.285 SYMLINK libspdk_notify.so 00:03:52.285 SYMLINK libspdk_keyring.so 00:03:52.285 SYMLINK libspdk_trace.so 00:03:52.547 CC lib/sock/sock.o 00:03:52.547 CC lib/sock/sock_rpc.o 00:03:52.547 CC lib/thread/thread.o 00:03:52.547 CC lib/thread/iobuf.o 00:03:53.204 LIB libspdk_sock.a 00:03:53.204 SO libspdk_sock.so.10.0 00:03:53.204 SYMLINK libspdk_sock.so 00:03:53.605 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:53.605 CC lib/nvme/nvme_ctrlr.o 00:03:53.605 CC lib/nvme/nvme_fabric.o 00:03:53.605 CC lib/nvme/nvme_ns_cmd.o 00:03:53.605 CC lib/nvme/nvme_ns.o 00:03:53.605 CC lib/nvme/nvme_pcie_common.o 00:03:53.605 CC lib/nvme/nvme_pcie.o 00:03:53.605 CC lib/nvme/nvme_qpair.o 00:03:53.605 CC lib/nvme/nvme.o 00:03:53.605 CC lib/nvme/nvme_quirks.o 00:03:53.605 CC lib/nvme/nvme_transport.o 00:03:53.605 CC lib/nvme/nvme_discovery.o 00:03:53.605 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:53.605 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:53.605 CC lib/nvme/nvme_tcp.o 00:03:53.605 CC lib/nvme/nvme_opal.o 00:03:53.605 CC lib/nvme/nvme_io_msg.o 00:03:53.605 CC lib/nvme/nvme_poll_group.o 00:03:53.605 CC lib/nvme/nvme_zns.o 00:03:53.605 CC lib/nvme/nvme_stubs.o 00:03:53.605 CC lib/nvme/nvme_auth.o 00:03:53.605 CC lib/nvme/nvme_cuse.o 00:03:53.605 CC lib/nvme/nvme_vfio_user.o 00:03:53.605 CC lib/nvme/nvme_rdma.o 00:03:53.867 LIB libspdk_thread.a 00:03:54.128 SO libspdk_thread.so.11.0 00:03:54.128 SYMLINK libspdk_thread.so 00:03:54.389 CC lib/accel/accel.o 00:03:54.389 CC lib/accel/accel_rpc.o 00:03:54.389 CC lib/accel/accel_sw.o 00:03:54.389 CC lib/blob/blobstore.o 00:03:54.389 CC lib/blob/request.o 00:03:54.389 CC lib/blob/zeroes.o 00:03:54.390 CC lib/blob/blob_bs_dev.o 00:03:54.390 CC lib/init/subsystem.o 00:03:54.390 CC lib/init/json_config.o 00:03:54.390 CC lib/init/subsystem_rpc.o 00:03:54.390 CC lib/init/rpc.o 00:03:54.390 CC lib/virtio/virtio.o 00:03:54.390 CC lib/vfu_tgt/tgt_endpoint.o 00:03:54.390 CC lib/fsdev/fsdev.o 00:03:54.390 CC lib/virtio/virtio_vhost_user.o 00:03:54.390 CC lib/vfu_tgt/tgt_rpc.o 00:03:54.390 CC lib/virtio/virtio_vfio_user.o 00:03:54.390 CC lib/virtio/virtio_pci.o 00:03:54.390 CC lib/fsdev/fsdev_io.o 00:03:54.390 CC lib/fsdev/fsdev_rpc.o 00:03:54.651 LIB libspdk_init.a 00:03:54.912 SO libspdk_init.so.6.0 00:03:54.912 LIB libspdk_vfu_tgt.a 00:03:54.912 LIB libspdk_virtio.a 00:03:54.912 SYMLINK libspdk_init.so 00:03:54.912 SO libspdk_vfu_tgt.so.3.0 00:03:54.912 SO libspdk_virtio.so.7.0 00:03:54.912 SYMLINK libspdk_vfu_tgt.so 00:03:54.912 SYMLINK 
libspdk_virtio.so 00:03:55.174 LIB libspdk_fsdev.a 00:03:55.174 SO libspdk_fsdev.so.2.0 00:03:55.174 CC lib/event/app.o 00:03:55.174 CC lib/event/reactor.o 00:03:55.174 CC lib/event/log_rpc.o 00:03:55.174 CC lib/event/app_rpc.o 00:03:55.174 CC lib/event/scheduler_static.o 00:03:55.174 SYMLINK libspdk_fsdev.so 00:03:55.436 LIB libspdk_accel.a 00:03:55.436 LIB libspdk_nvme.a 00:03:55.436 SO libspdk_accel.so.16.0 00:03:55.697 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:55.697 SYMLINK libspdk_accel.so 00:03:55.697 SO libspdk_nvme.so.15.0 00:03:55.697 LIB libspdk_event.a 00:03:55.697 SO libspdk_event.so.14.0 00:03:55.697 SYMLINK libspdk_event.so 00:03:55.957 SYMLINK libspdk_nvme.so 00:03:55.957 CC lib/bdev/bdev.o 00:03:55.957 CC lib/bdev/bdev_rpc.o 00:03:55.957 CC lib/bdev/bdev_zone.o 00:03:55.957 CC lib/bdev/part.o 00:03:55.957 CC lib/bdev/scsi_nvme.o 00:03:56.218 LIB libspdk_fuse_dispatcher.a 00:03:56.218 SO libspdk_fuse_dispatcher.so.1.0 00:03:56.218 SYMLINK libspdk_fuse_dispatcher.so 00:03:57.161 LIB libspdk_blob.a 00:03:57.161 SO libspdk_blob.so.12.0 00:03:57.161 SYMLINK libspdk_blob.so 00:03:57.732 CC lib/lvol/lvol.o 00:03:57.732 CC lib/blobfs/blobfs.o 00:03:57.732 CC lib/blobfs/tree.o 00:03:58.304 LIB libspdk_bdev.a 00:03:58.304 SO libspdk_bdev.so.17.0 00:03:58.304 LIB libspdk_blobfs.a 00:03:58.304 SYMLINK libspdk_bdev.so 00:03:58.570 SO libspdk_blobfs.so.11.0 00:03:58.570 LIB libspdk_lvol.a 00:03:58.570 SO libspdk_lvol.so.11.0 00:03:58.570 SYMLINK libspdk_blobfs.so 00:03:58.570 SYMLINK libspdk_lvol.so 00:03:58.832 CC lib/nvmf/ctrlr.o 00:03:58.832 CC lib/nbd/nbd.o 00:03:58.832 CC lib/scsi/dev.o 00:03:58.832 CC lib/nvmf/ctrlr_discovery.o 00:03:58.832 CC lib/nbd/nbd_rpc.o 00:03:58.832 CC lib/scsi/lun.o 00:03:58.832 CC lib/nvmf/ctrlr_bdev.o 00:03:58.832 CC lib/nvmf/subsystem.o 00:03:58.832 CC lib/scsi/port.o 00:03:58.832 CC lib/nvmf/nvmf.o 00:03:58.832 CC lib/scsi/scsi.o 00:03:58.832 CC lib/nvmf/nvmf_rpc.o 00:03:58.832 CC lib/scsi/scsi_bdev.o 00:03:58.832 CC lib/nvmf/transport.o 00:03:58.832 CC lib/nvmf/tcp.o 00:03:58.832 CC lib/nvmf/stubs.o 00:03:58.832 CC lib/scsi/scsi_pr.o 00:03:58.832 CC lib/scsi/scsi_rpc.o 00:03:58.832 CC lib/nvmf/mdns_server.o 00:03:58.832 CC lib/ublk/ublk.o 00:03:58.832 CC lib/ftl/ftl_core.o 00:03:58.832 CC lib/nvmf/vfio_user.o 00:03:58.832 CC lib/ublk/ublk_rpc.o 00:03:58.832 CC lib/ftl/ftl_init.o 00:03:58.832 CC lib/scsi/task.o 00:03:58.832 CC lib/nvmf/rdma.o 00:03:58.832 CC lib/nvmf/auth.o 00:03:58.832 CC lib/ftl/ftl_layout.o 00:03:58.832 CC lib/ftl/ftl_debug.o 00:03:58.832 CC lib/ftl/ftl_io.o 00:03:58.832 CC lib/ftl/ftl_sb.o 00:03:58.832 CC lib/ftl/ftl_l2p.o 00:03:58.832 CC lib/ftl/ftl_l2p_flat.o 00:03:58.832 CC lib/ftl/ftl_nv_cache.o 00:03:58.832 CC lib/ftl/ftl_band.o 00:03:58.832 CC lib/ftl/ftl_band_ops.o 00:03:58.832 CC lib/ftl/ftl_writer.o 00:03:58.832 CC lib/ftl/ftl_rq.o 00:03:58.832 CC lib/ftl/ftl_reloc.o 00:03:58.832 CC lib/ftl/ftl_l2p_cache.o 00:03:58.832 CC lib/ftl/ftl_p2l.o 00:03:58.832 CC lib/ftl/ftl_p2l_log.o 00:03:58.832 CC lib/ftl/mngt/ftl_mngt.o 00:03:58.832 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:58.832 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:58.832 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:58.832 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:58.832 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:58.832 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:58.832 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:58.832 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:58.832 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:58.832 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:58.832 CC lib/ftl/mngt/ftl_mngt_recovery.o 
00:03:58.832 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:58.832 CC lib/ftl/utils/ftl_conf.o 00:03:58.832 CC lib/ftl/utils/ftl_md.o 00:03:58.832 CC lib/ftl/utils/ftl_mempool.o 00:03:58.832 CC lib/ftl/utils/ftl_bitmap.o 00:03:58.832 CC lib/ftl/utils/ftl_property.o 00:03:58.832 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:58.832 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:58.832 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:58.832 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:58.832 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:58.833 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:58.833 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:58.833 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:58.833 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:58.833 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:58.833 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:58.833 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:58.833 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:58.833 CC lib/ftl/base/ftl_base_dev.o 00:03:58.833 CC lib/ftl/base/ftl_base_bdev.o 00:03:58.833 CC lib/ftl/ftl_trace.o 00:03:59.404 LIB libspdk_nbd.a 00:03:59.404 SO libspdk_nbd.so.7.0 00:03:59.666 LIB libspdk_scsi.a 00:03:59.666 SYMLINK libspdk_nbd.so 00:03:59.666 SO libspdk_scsi.so.9.0 00:03:59.666 SYMLINK libspdk_scsi.so 00:03:59.666 LIB libspdk_ublk.a 00:03:59.666 SO libspdk_ublk.so.3.0 00:03:59.928 SYMLINK libspdk_ublk.so 00:03:59.928 CC lib/iscsi/conn.o 00:03:59.928 CC lib/iscsi/init_grp.o 00:03:59.928 CC lib/iscsi/iscsi.o 00:03:59.928 CC lib/iscsi/param.o 00:03:59.928 CC lib/iscsi/portal_grp.o 00:03:59.928 CC lib/iscsi/tgt_node.o 00:03:59.928 CC lib/iscsi/iscsi_subsystem.o 00:03:59.928 CC lib/iscsi/iscsi_rpc.o 00:03:59.928 CC lib/iscsi/task.o 00:03:59.929 CC lib/vhost/vhost.o 00:03:59.929 CC lib/vhost/vhost_rpc.o 00:03:59.929 CC lib/vhost/vhost_scsi.o 00:03:59.929 CC lib/vhost/vhost_blk.o 00:03:59.929 CC lib/vhost/rte_vhost_user.o 00:04:00.190 LIB libspdk_ftl.a 00:04:00.190 SO libspdk_ftl.so.9.0 00:04:00.452 SYMLINK libspdk_ftl.so 00:04:01.025 LIB libspdk_nvmf.a 00:04:01.025 SO libspdk_nvmf.so.20.0 00:04:01.025 LIB libspdk_vhost.a 00:04:01.025 SO libspdk_vhost.so.8.0 00:04:01.286 SYMLINK libspdk_nvmf.so 00:04:01.286 SYMLINK libspdk_vhost.so 00:04:01.286 LIB libspdk_iscsi.a 00:04:01.286 SO libspdk_iscsi.so.8.0 00:04:01.550 SYMLINK libspdk_iscsi.so 00:04:02.124 CC module/env_dpdk/env_dpdk_rpc.o 00:04:02.124 CC module/vfu_device/vfu_virtio.o 00:04:02.124 CC module/vfu_device/vfu_virtio_blk.o 00:04:02.124 CC module/vfu_device/vfu_virtio_scsi.o 00:04:02.124 CC module/vfu_device/vfu_virtio_rpc.o 00:04:02.124 CC module/vfu_device/vfu_virtio_fs.o 00:04:02.124 CC module/blob/bdev/blob_bdev.o 00:04:02.386 LIB libspdk_env_dpdk_rpc.a 00:04:02.386 CC module/scheduler/gscheduler/gscheduler.o 00:04:02.386 CC module/accel/dsa/accel_dsa.o 00:04:02.386 CC module/accel/dsa/accel_dsa_rpc.o 00:04:02.386 CC module/accel/error/accel_error.o 00:04:02.386 CC module/sock/posix/posix.o 00:04:02.386 CC module/accel/error/accel_error_rpc.o 00:04:02.386 CC module/keyring/linux/keyring.o 00:04:02.386 CC module/keyring/linux/keyring_rpc.o 00:04:02.386 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:02.386 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:02.386 CC module/fsdev/aio/fsdev_aio.o 00:04:02.386 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:02.386 CC module/fsdev/aio/linux_aio_mgr.o 00:04:02.386 CC module/keyring/file/keyring.o 00:04:02.386 CC module/keyring/file/keyring_rpc.o 00:04:02.386 CC module/accel/iaa/accel_iaa.o 00:04:02.386 CC module/accel/iaa/accel_iaa_rpc.o 00:04:02.386 CC 
module/accel/ioat/accel_ioat.o 00:04:02.386 CC module/accel/ioat/accel_ioat_rpc.o 00:04:02.386 SO libspdk_env_dpdk_rpc.so.6.0 00:04:02.386 SYMLINK libspdk_env_dpdk_rpc.so 00:04:02.386 LIB libspdk_scheduler_gscheduler.a 00:04:02.386 SO libspdk_scheduler_gscheduler.so.4.0 00:04:02.386 LIB libspdk_scheduler_dpdk_governor.a 00:04:02.386 LIB libspdk_keyring_linux.a 00:04:02.386 LIB libspdk_keyring_file.a 00:04:02.647 SO libspdk_keyring_file.so.2.0 00:04:02.647 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:02.647 LIB libspdk_accel_error.a 00:04:02.647 SO libspdk_keyring_linux.so.1.0 00:04:02.647 LIB libspdk_scheduler_dynamic.a 00:04:02.647 LIB libspdk_accel_iaa.a 00:04:02.647 LIB libspdk_accel_ioat.a 00:04:02.647 SYMLINK libspdk_scheduler_gscheduler.so 00:04:02.647 LIB libspdk_blob_bdev.a 00:04:02.647 SO libspdk_accel_iaa.so.3.0 00:04:02.647 SO libspdk_accel_error.so.2.0 00:04:02.647 SO libspdk_scheduler_dynamic.so.4.0 00:04:02.647 SO libspdk_accel_ioat.so.6.0 00:04:02.647 SYMLINK libspdk_keyring_file.so 00:04:02.647 LIB libspdk_accel_dsa.a 00:04:02.647 SO libspdk_blob_bdev.so.12.0 00:04:02.647 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:02.647 SYMLINK libspdk_keyring_linux.so 00:04:02.647 SO libspdk_accel_dsa.so.5.0 00:04:02.647 SYMLINK libspdk_accel_iaa.so 00:04:02.647 SYMLINK libspdk_accel_ioat.so 00:04:02.647 SYMLINK libspdk_scheduler_dynamic.so 00:04:02.647 SYMLINK libspdk_accel_error.so 00:04:02.647 SYMLINK libspdk_blob_bdev.so 00:04:02.647 LIB libspdk_vfu_device.a 00:04:02.647 SYMLINK libspdk_accel_dsa.so 00:04:02.647 SO libspdk_vfu_device.so.3.0 00:04:02.908 SYMLINK libspdk_vfu_device.so 00:04:02.908 LIB libspdk_fsdev_aio.a 00:04:02.908 SO libspdk_fsdev_aio.so.1.0 00:04:02.908 LIB libspdk_sock_posix.a 00:04:03.169 SO libspdk_sock_posix.so.6.0 00:04:03.169 SYMLINK libspdk_fsdev_aio.so 00:04:03.169 SYMLINK libspdk_sock_posix.so 00:04:03.169 CC module/bdev/delay/vbdev_delay.o 00:04:03.169 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:03.169 CC module/bdev/error/vbdev_error.o 00:04:03.169 CC module/blobfs/bdev/blobfs_bdev.o 00:04:03.169 CC module/bdev/lvol/vbdev_lvol.o 00:04:03.169 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:03.169 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:03.169 CC module/bdev/error/vbdev_error_rpc.o 00:04:03.169 CC module/bdev/split/vbdev_split.o 00:04:03.169 CC module/bdev/split/vbdev_split_rpc.o 00:04:03.169 CC module/bdev/ftl/bdev_ftl.o 00:04:03.169 CC module/bdev/null/bdev_null.o 00:04:03.169 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:03.169 CC module/bdev/null/bdev_null_rpc.o 00:04:03.169 CC module/bdev/gpt/gpt.o 00:04:03.169 CC module/bdev/aio/bdev_aio.o 00:04:03.169 CC module/bdev/aio/bdev_aio_rpc.o 00:04:03.169 CC module/bdev/gpt/vbdev_gpt.o 00:04:03.169 CC module/bdev/raid/bdev_raid.o 00:04:03.169 CC module/bdev/raid/bdev_raid_rpc.o 00:04:03.169 CC module/bdev/malloc/bdev_malloc.o 00:04:03.169 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:03.169 CC module/bdev/raid/bdev_raid_sb.o 00:04:03.169 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:03.169 CC module/bdev/passthru/vbdev_passthru.o 00:04:03.169 CC module/bdev/nvme/bdev_nvme.o 00:04:03.169 CC module/bdev/raid/raid0.o 00:04:03.169 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:03.169 CC module/bdev/raid/concat.o 00:04:03.169 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:03.169 CC module/bdev/raid/raid1.o 00:04:03.169 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:03.169 CC module/bdev/nvme/nvme_rpc.o 00:04:03.169 CC module/bdev/nvme/bdev_mdns_client.o 00:04:03.169 CC 
module/bdev/nvme/vbdev_opal.o 00:04:03.169 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:03.169 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:03.169 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:03.169 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:03.169 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:03.169 CC module/bdev/iscsi/bdev_iscsi.o 00:04:03.169 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:03.429 LIB libspdk_blobfs_bdev.a 00:04:03.689 SO libspdk_blobfs_bdev.so.6.0 00:04:03.689 LIB libspdk_bdev_split.a 00:04:03.689 LIB libspdk_bdev_error.a 00:04:03.689 SO libspdk_bdev_split.so.6.0 00:04:03.689 LIB libspdk_bdev_null.a 00:04:03.689 LIB libspdk_bdev_gpt.a 00:04:03.689 SYMLINK libspdk_blobfs_bdev.so 00:04:03.689 SO libspdk_bdev_error.so.6.0 00:04:03.689 LIB libspdk_bdev_ftl.a 00:04:03.689 SO libspdk_bdev_null.so.6.0 00:04:03.689 SO libspdk_bdev_gpt.so.6.0 00:04:03.689 LIB libspdk_bdev_delay.a 00:04:03.689 LIB libspdk_bdev_aio.a 00:04:03.689 SYMLINK libspdk_bdev_split.so 00:04:03.689 LIB libspdk_bdev_passthru.a 00:04:03.689 LIB libspdk_bdev_zone_block.a 00:04:03.689 SO libspdk_bdev_ftl.so.6.0 00:04:03.689 SYMLINK libspdk_bdev_error.so 00:04:03.689 SO libspdk_bdev_delay.so.6.0 00:04:03.689 SO libspdk_bdev_aio.so.6.0 00:04:03.690 SO libspdk_bdev_zone_block.so.6.0 00:04:03.690 SYMLINK libspdk_bdev_null.so 00:04:03.690 SO libspdk_bdev_passthru.so.6.0 00:04:03.690 SYMLINK libspdk_bdev_gpt.so 00:04:03.690 LIB libspdk_bdev_malloc.a 00:04:03.690 LIB libspdk_bdev_iscsi.a 00:04:03.690 SYMLINK libspdk_bdev_ftl.so 00:04:03.690 SYMLINK libspdk_bdev_delay.so 00:04:03.690 SYMLINK libspdk_bdev_zone_block.so 00:04:03.690 SO libspdk_bdev_iscsi.so.6.0 00:04:03.690 SO libspdk_bdev_malloc.so.6.0 00:04:03.690 SYMLINK libspdk_bdev_aio.so 00:04:03.949 SYMLINK libspdk_bdev_passthru.so 00:04:03.949 LIB libspdk_bdev_lvol.a 00:04:03.949 SYMLINK libspdk_bdev_iscsi.so 00:04:03.949 LIB libspdk_bdev_virtio.a 00:04:03.949 SYMLINK libspdk_bdev_malloc.so 00:04:03.949 SO libspdk_bdev_lvol.so.6.0 00:04:03.949 SO libspdk_bdev_virtio.so.6.0 00:04:03.949 SYMLINK libspdk_bdev_lvol.so 00:04:03.949 SYMLINK libspdk_bdev_virtio.so 00:04:04.208 LIB libspdk_bdev_raid.a 00:04:04.469 SO libspdk_bdev_raid.so.6.0 00:04:04.469 SYMLINK libspdk_bdev_raid.so 00:04:05.855 LIB libspdk_bdev_nvme.a 00:04:05.855 SO libspdk_bdev_nvme.so.7.1 00:04:05.855 SYMLINK libspdk_bdev_nvme.so 00:04:06.428 CC module/event/subsystems/iobuf/iobuf.o 00:04:06.428 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:06.428 CC module/event/subsystems/vmd/vmd.o 00:04:06.428 CC module/event/subsystems/sock/sock.o 00:04:06.428 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:06.428 CC module/event/subsystems/keyring/keyring.o 00:04:06.428 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:06.428 CC module/event/subsystems/scheduler/scheduler.o 00:04:06.428 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:06.428 CC module/event/subsystems/fsdev/fsdev.o 00:04:06.689 LIB libspdk_event_sock.a 00:04:06.689 LIB libspdk_event_scheduler.a 00:04:06.689 LIB libspdk_event_vfu_tgt.a 00:04:06.689 LIB libspdk_event_keyring.a 00:04:06.689 LIB libspdk_event_iobuf.a 00:04:06.689 LIB libspdk_event_vmd.a 00:04:06.689 LIB libspdk_event_vhost_blk.a 00:04:06.689 LIB libspdk_event_fsdev.a 00:04:06.690 SO libspdk_event_vfu_tgt.so.3.0 00:04:06.690 SO libspdk_event_sock.so.5.0 00:04:06.690 SO libspdk_event_scheduler.so.4.0 00:04:06.690 SO libspdk_event_keyring.so.1.0 00:04:06.690 SO libspdk_event_iobuf.so.3.0 00:04:06.690 SO libspdk_event_vhost_blk.so.3.0 00:04:06.690 SO 
libspdk_event_fsdev.so.1.0 00:04:06.690 SO libspdk_event_vmd.so.6.0 00:04:06.690 SYMLINK libspdk_event_keyring.so 00:04:06.690 SYMLINK libspdk_event_vfu_tgt.so 00:04:06.690 SYMLINK libspdk_event_scheduler.so 00:04:06.951 SYMLINK libspdk_event_sock.so 00:04:06.951 SYMLINK libspdk_event_vhost_blk.so 00:04:06.951 SYMLINK libspdk_event_iobuf.so 00:04:06.951 SYMLINK libspdk_event_fsdev.so 00:04:06.951 SYMLINK libspdk_event_vmd.so 00:04:07.212 CC module/event/subsystems/accel/accel.o 00:04:07.212 LIB libspdk_event_accel.a 00:04:07.473 SO libspdk_event_accel.so.6.0 00:04:07.473 SYMLINK libspdk_event_accel.so 00:04:07.733 CC module/event/subsystems/bdev/bdev.o 00:04:07.993 LIB libspdk_event_bdev.a 00:04:07.993 SO libspdk_event_bdev.so.6.0 00:04:07.993 SYMLINK libspdk_event_bdev.so 00:04:08.564 CC module/event/subsystems/ublk/ublk.o 00:04:08.564 CC module/event/subsystems/nbd/nbd.o 00:04:08.564 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:08.564 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:08.564 CC module/event/subsystems/scsi/scsi.o 00:04:08.564 LIB libspdk_event_nbd.a 00:04:08.564 LIB libspdk_event_ublk.a 00:04:08.564 LIB libspdk_event_scsi.a 00:04:08.564 SO libspdk_event_nbd.so.6.0 00:04:08.564 SO libspdk_event_ublk.so.3.0 00:04:08.564 SO libspdk_event_scsi.so.6.0 00:04:08.825 LIB libspdk_event_nvmf.a 00:04:08.825 SYMLINK libspdk_event_nbd.so 00:04:08.825 SYMLINK libspdk_event_scsi.so 00:04:08.825 SYMLINK libspdk_event_ublk.so 00:04:08.825 SO libspdk_event_nvmf.so.6.0 00:04:08.825 SYMLINK libspdk_event_nvmf.so 00:04:09.086 CC module/event/subsystems/iscsi/iscsi.o 00:04:09.086 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:09.347 LIB libspdk_event_vhost_scsi.a 00:04:09.347 LIB libspdk_event_iscsi.a 00:04:09.347 SO libspdk_event_vhost_scsi.so.3.0 00:04:09.347 SO libspdk_event_iscsi.so.6.0 00:04:09.347 SYMLINK libspdk_event_vhost_scsi.so 00:04:09.347 SYMLINK libspdk_event_iscsi.so 00:04:09.608 SO libspdk.so.6.0 00:04:09.608 SYMLINK libspdk.so 00:04:09.869 CC app/trace_record/trace_record.o 00:04:10.133 CC app/spdk_nvme_identify/identify.o 00:04:10.133 CC app/spdk_top/spdk_top.o 00:04:10.133 CC test/rpc_client/rpc_client_test.o 00:04:10.133 CXX app/trace/trace.o 00:04:10.133 CC app/spdk_nvme_discover/discovery_aer.o 00:04:10.133 TEST_HEADER include/spdk/accel.h 00:04:10.133 TEST_HEADER include/spdk/barrier.h 00:04:10.133 CC app/spdk_nvme_perf/perf.o 00:04:10.133 TEST_HEADER include/spdk/accel_module.h 00:04:10.133 TEST_HEADER include/spdk/assert.h 00:04:10.133 CC app/spdk_lspci/spdk_lspci.o 00:04:10.133 TEST_HEADER include/spdk/base64.h 00:04:10.133 TEST_HEADER include/spdk/bdev_module.h 00:04:10.133 TEST_HEADER include/spdk/bdev.h 00:04:10.133 TEST_HEADER include/spdk/bdev_zone.h 00:04:10.133 TEST_HEADER include/spdk/bit_array.h 00:04:10.133 TEST_HEADER include/spdk/bit_pool.h 00:04:10.133 TEST_HEADER include/spdk/blob_bdev.h 00:04:10.133 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:10.133 TEST_HEADER include/spdk/blobfs.h 00:04:10.133 TEST_HEADER include/spdk/blob.h 00:04:10.133 TEST_HEADER include/spdk/config.h 00:04:10.133 TEST_HEADER include/spdk/conf.h 00:04:10.133 TEST_HEADER include/spdk/cpuset.h 00:04:10.133 TEST_HEADER include/spdk/crc16.h 00:04:10.133 TEST_HEADER include/spdk/crc32.h 00:04:10.133 TEST_HEADER include/spdk/crc64.h 00:04:10.133 TEST_HEADER include/spdk/dif.h 00:04:10.133 TEST_HEADER include/spdk/dma.h 00:04:10.133 TEST_HEADER include/spdk/endian.h 00:04:10.133 TEST_HEADER include/spdk/env.h 00:04:10.133 TEST_HEADER include/spdk/event.h 00:04:10.133 
TEST_HEADER include/spdk/env_dpdk.h 00:04:10.133 TEST_HEADER include/spdk/fd_group.h 00:04:10.133 TEST_HEADER include/spdk/fd.h 00:04:10.133 TEST_HEADER include/spdk/file.h 00:04:10.133 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:10.133 TEST_HEADER include/spdk/fsdev.h 00:04:10.133 TEST_HEADER include/spdk/fsdev_module.h 00:04:10.133 TEST_HEADER include/spdk/ftl.h 00:04:10.133 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:10.133 TEST_HEADER include/spdk/gpt_spec.h 00:04:10.133 TEST_HEADER include/spdk/histogram_data.h 00:04:10.133 CC app/iscsi_tgt/iscsi_tgt.o 00:04:10.133 TEST_HEADER include/spdk/hexlify.h 00:04:10.133 CC app/spdk_dd/spdk_dd.o 00:04:10.133 TEST_HEADER include/spdk/idxd.h 00:04:10.133 TEST_HEADER include/spdk/idxd_spec.h 00:04:10.133 TEST_HEADER include/spdk/init.h 00:04:10.133 TEST_HEADER include/spdk/ioat.h 00:04:10.133 CC app/nvmf_tgt/nvmf_main.o 00:04:10.133 TEST_HEADER include/spdk/iscsi_spec.h 00:04:10.133 TEST_HEADER include/spdk/ioat_spec.h 00:04:10.133 TEST_HEADER include/spdk/json.h 00:04:10.133 TEST_HEADER include/spdk/jsonrpc.h 00:04:10.133 TEST_HEADER include/spdk/keyring.h 00:04:10.133 TEST_HEADER include/spdk/keyring_module.h 00:04:10.133 TEST_HEADER include/spdk/likely.h 00:04:10.133 TEST_HEADER include/spdk/log.h 00:04:10.133 TEST_HEADER include/spdk/lvol.h 00:04:10.133 TEST_HEADER include/spdk/md5.h 00:04:10.133 TEST_HEADER include/spdk/memory.h 00:04:10.133 TEST_HEADER include/spdk/mmio.h 00:04:10.133 TEST_HEADER include/spdk/nbd.h 00:04:10.133 TEST_HEADER include/spdk/net.h 00:04:10.133 TEST_HEADER include/spdk/notify.h 00:04:10.133 TEST_HEADER include/spdk/nvme.h 00:04:10.133 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:10.133 TEST_HEADER include/spdk/nvme_intel.h 00:04:10.133 CC app/spdk_tgt/spdk_tgt.o 00:04:10.133 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:10.133 TEST_HEADER include/spdk/nvme_spec.h 00:04:10.133 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:10.133 TEST_HEADER include/spdk/nvme_zns.h 00:04:10.133 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:10.133 TEST_HEADER include/spdk/nvmf.h 00:04:10.133 TEST_HEADER include/spdk/opal.h 00:04:10.133 TEST_HEADER include/spdk/nvmf_spec.h 00:04:10.133 TEST_HEADER include/spdk/nvmf_transport.h 00:04:10.133 TEST_HEADER include/spdk/pci_ids.h 00:04:10.133 TEST_HEADER include/spdk/opal_spec.h 00:04:10.133 TEST_HEADER include/spdk/pipe.h 00:04:10.133 TEST_HEADER include/spdk/queue.h 00:04:10.133 TEST_HEADER include/spdk/rpc.h 00:04:10.133 TEST_HEADER include/spdk/reduce.h 00:04:10.133 TEST_HEADER include/spdk/scheduler.h 00:04:10.133 TEST_HEADER include/spdk/scsi.h 00:04:10.133 TEST_HEADER include/spdk/scsi_spec.h 00:04:10.133 TEST_HEADER include/spdk/sock.h 00:04:10.133 TEST_HEADER include/spdk/stdinc.h 00:04:10.133 TEST_HEADER include/spdk/string.h 00:04:10.133 TEST_HEADER include/spdk/thread.h 00:04:10.133 TEST_HEADER include/spdk/trace.h 00:04:10.133 TEST_HEADER include/spdk/trace_parser.h 00:04:10.133 TEST_HEADER include/spdk/tree.h 00:04:10.133 TEST_HEADER include/spdk/ublk.h 00:04:10.133 TEST_HEADER include/spdk/util.h 00:04:10.133 TEST_HEADER include/spdk/version.h 00:04:10.133 TEST_HEADER include/spdk/uuid.h 00:04:10.133 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:10.133 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:10.133 TEST_HEADER include/spdk/vhost.h 00:04:10.133 TEST_HEADER include/spdk/xor.h 00:04:10.134 TEST_HEADER include/spdk/vmd.h 00:04:10.134 TEST_HEADER include/spdk/zipf.h 00:04:10.134 CXX test/cpp_headers/accel.o 00:04:10.134 CXX test/cpp_headers/accel_module.o 
00:04:10.134 CXX test/cpp_headers/assert.o 00:04:10.134 CXX test/cpp_headers/barrier.o 00:04:10.134 CXX test/cpp_headers/base64.o 00:04:10.134 CXX test/cpp_headers/bdev.o 00:04:10.134 CXX test/cpp_headers/bit_array.o 00:04:10.134 CXX test/cpp_headers/bdev_module.o 00:04:10.134 CXX test/cpp_headers/bdev_zone.o 00:04:10.134 CXX test/cpp_headers/bit_pool.o 00:04:10.134 CXX test/cpp_headers/blobfs_bdev.o 00:04:10.134 CXX test/cpp_headers/blob_bdev.o 00:04:10.134 CXX test/cpp_headers/blobfs.o 00:04:10.134 CXX test/cpp_headers/blob.o 00:04:10.134 CXX test/cpp_headers/conf.o 00:04:10.134 CXX test/cpp_headers/cpuset.o 00:04:10.134 CXX test/cpp_headers/config.o 00:04:10.134 CXX test/cpp_headers/crc16.o 00:04:10.134 CXX test/cpp_headers/crc32.o 00:04:10.134 CXX test/cpp_headers/crc64.o 00:04:10.134 CXX test/cpp_headers/dif.o 00:04:10.134 CXX test/cpp_headers/dma.o 00:04:10.134 CXX test/cpp_headers/endian.o 00:04:10.134 CXX test/cpp_headers/env_dpdk.o 00:04:10.134 CXX test/cpp_headers/env.o 00:04:10.134 CXX test/cpp_headers/fd_group.o 00:04:10.134 CXX test/cpp_headers/event.o 00:04:10.134 CXX test/cpp_headers/fd.o 00:04:10.134 CXX test/cpp_headers/file.o 00:04:10.134 CXX test/cpp_headers/fsdev_module.o 00:04:10.134 CXX test/cpp_headers/ftl.o 00:04:10.134 CXX test/cpp_headers/fsdev.o 00:04:10.134 CXX test/cpp_headers/gpt_spec.o 00:04:10.134 CXX test/cpp_headers/fuse_dispatcher.o 00:04:10.134 CXX test/cpp_headers/hexlify.o 00:04:10.134 CXX test/cpp_headers/histogram_data.o 00:04:10.134 CXX test/cpp_headers/idxd.o 00:04:10.134 CXX test/cpp_headers/idxd_spec.o 00:04:10.134 CXX test/cpp_headers/ioat.o 00:04:10.134 CXX test/cpp_headers/init.o 00:04:10.134 CXX test/cpp_headers/iscsi_spec.o 00:04:10.134 CXX test/cpp_headers/ioat_spec.o 00:04:10.134 CXX test/cpp_headers/json.o 00:04:10.134 CXX test/cpp_headers/jsonrpc.o 00:04:10.134 CXX test/cpp_headers/keyring.o 00:04:10.134 CXX test/cpp_headers/keyring_module.o 00:04:10.134 CXX test/cpp_headers/log.o 00:04:10.134 CXX test/cpp_headers/lvol.o 00:04:10.134 CXX test/cpp_headers/md5.o 00:04:10.134 CXX test/cpp_headers/memory.o 00:04:10.134 CXX test/cpp_headers/likely.o 00:04:10.134 CXX test/cpp_headers/mmio.o 00:04:10.134 CXX test/cpp_headers/net.o 00:04:10.134 CXX test/cpp_headers/notify.o 00:04:10.134 CXX test/cpp_headers/nbd.o 00:04:10.134 CXX test/cpp_headers/nvme_intel.o 00:04:10.134 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:10.134 CXX test/cpp_headers/nvme.o 00:04:10.134 CXX test/cpp_headers/nvme_ocssd.o 00:04:10.134 CC examples/util/zipf/zipf.o 00:04:10.134 CXX test/cpp_headers/nvme_spec.o 00:04:10.134 CXX test/cpp_headers/nvme_zns.o 00:04:10.401 CC examples/ioat/verify/verify.o 00:04:10.401 CXX test/cpp_headers/nvmf_cmd.o 00:04:10.401 CC test/thread/poller_perf/poller_perf.o 00:04:10.401 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:10.401 CXX test/cpp_headers/nvmf_spec.o 00:04:10.401 CC test/app/histogram_perf/histogram_perf.o 00:04:10.401 CXX test/cpp_headers/nvmf.o 00:04:10.401 CXX test/cpp_headers/opal_spec.o 00:04:10.401 CXX test/cpp_headers/nvmf_transport.o 00:04:10.401 CC test/app/jsoncat/jsoncat.o 00:04:10.401 CXX test/cpp_headers/opal.o 00:04:10.401 CXX test/cpp_headers/pipe.o 00:04:10.401 CXX test/cpp_headers/pci_ids.o 00:04:10.401 LINK spdk_lspci 00:04:10.401 CXX test/cpp_headers/queue.o 00:04:10.401 CXX test/cpp_headers/reduce.o 00:04:10.401 CXX test/cpp_headers/rpc.o 00:04:10.401 CC app/fio/nvme/fio_plugin.o 00:04:10.401 CC test/env/vtophys/vtophys.o 00:04:10.401 CXX test/cpp_headers/scheduler.o 00:04:10.401 CXX test/cpp_headers/scsi.o 
00:04:10.401 CC test/env/pci/pci_ut.o 00:04:10.401 CC examples/ioat/perf/perf.o 00:04:10.401 CXX test/cpp_headers/scsi_spec.o 00:04:10.401 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:10.401 CXX test/cpp_headers/string.o 00:04:10.401 CXX test/cpp_headers/sock.o 00:04:10.401 CXX test/cpp_headers/stdinc.o 00:04:10.401 CXX test/cpp_headers/thread.o 00:04:10.401 CXX test/cpp_headers/trace_parser.o 00:04:10.401 CC test/app/stub/stub.o 00:04:10.401 CXX test/cpp_headers/trace.o 00:04:10.401 CXX test/cpp_headers/tree.o 00:04:10.401 CXX test/cpp_headers/util.o 00:04:10.401 CXX test/cpp_headers/ublk.o 00:04:10.401 CC test/env/memory/memory_ut.o 00:04:10.401 CXX test/cpp_headers/version.o 00:04:10.401 CXX test/cpp_headers/uuid.o 00:04:10.402 CXX test/cpp_headers/vfio_user_pci.o 00:04:10.402 CXX test/cpp_headers/vhost.o 00:04:10.402 CXX test/cpp_headers/vfio_user_spec.o 00:04:10.402 CXX test/cpp_headers/vmd.o 00:04:10.402 CXX test/cpp_headers/xor.o 00:04:10.402 CXX test/cpp_headers/zipf.o 00:04:10.402 CC test/app/bdev_svc/bdev_svc.o 00:04:10.402 CC test/dma/test_dma/test_dma.o 00:04:10.402 CC app/fio/bdev/fio_plugin.o 00:04:10.402 LINK rpc_client_test 00:04:10.402 LINK spdk_nvme_discover 00:04:10.670 LINK interrupt_tgt 00:04:10.670 LINK spdk_trace_record 00:04:10.670 LINK nvmf_tgt 00:04:10.929 LINK spdk_tgt 00:04:10.929 LINK iscsi_tgt 00:04:10.929 CC test/env/mem_callbacks/mem_callbacks.o 00:04:10.930 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:10.930 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:10.930 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:10.930 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:10.930 LINK spdk_dd 00:04:11.189 LINK env_dpdk_post_init 00:04:11.189 LINK verify 00:04:11.189 LINK poller_perf 00:04:11.189 LINK spdk_trace 00:04:11.189 LINK histogram_perf 00:04:11.189 LINK zipf 00:04:11.189 LINK vtophys 00:04:11.451 LINK jsoncat 00:04:11.451 LINK bdev_svc 00:04:11.451 LINK stub 00:04:11.451 LINK ioat_perf 00:04:11.711 LINK spdk_bdev 00:04:11.711 CC app/vhost/vhost.o 00:04:11.711 LINK nvme_fuzz 00:04:11.711 CC test/event/event_perf/event_perf.o 00:04:11.711 LINK pci_ut 00:04:11.711 CC test/event/reactor_perf/reactor_perf.o 00:04:11.711 CC test/event/reactor/reactor.o 00:04:11.711 LINK vhost_fuzz 00:04:11.711 CC test/event/app_repeat/app_repeat.o 00:04:11.711 LINK test_dma 00:04:11.711 CC test/event/scheduler/scheduler.o 00:04:11.711 LINK spdk_nvme 00:04:11.712 LINK spdk_nvme_identify 00:04:11.974 LINK mem_callbacks 00:04:11.974 LINK vhost 00:04:11.974 LINK reactor_perf 00:04:11.974 LINK spdk_nvme_perf 00:04:11.974 LINK event_perf 00:04:11.974 CC examples/vmd/led/led.o 00:04:11.974 LINK reactor 00:04:11.974 CC examples/vmd/lsvmd/lsvmd.o 00:04:11.974 CC examples/idxd/perf/perf.o 00:04:11.974 CC examples/sock/hello_world/hello_sock.o 00:04:11.974 LINK spdk_top 00:04:11.974 CC examples/thread/thread/thread_ex.o 00:04:11.974 LINK app_repeat 00:04:11.974 LINK scheduler 00:04:11.974 LINK lsvmd 00:04:11.974 LINK led 00:04:12.240 LINK hello_sock 00:04:12.240 LINK thread 00:04:12.240 LINK idxd_perf 00:04:12.240 CC test/nvme/e2edp/nvme_dp.o 00:04:12.240 CC test/nvme/overhead/overhead.o 00:04:12.240 CC test/nvme/reset/reset.o 00:04:12.240 CC test/nvme/reserve/reserve.o 00:04:12.240 CC test/nvme/err_injection/err_injection.o 00:04:12.240 CC test/nvme/sgl/sgl.o 00:04:12.240 CC test/nvme/aer/aer.o 00:04:12.240 CC test/nvme/connect_stress/connect_stress.o 00:04:12.240 CC test/nvme/boot_partition/boot_partition.o 00:04:12.240 CC test/nvme/cuse/cuse.o 00:04:12.240 CC 
test/nvme/compliance/nvme_compliance.o 00:04:12.240 CC test/nvme/simple_copy/simple_copy.o 00:04:12.240 CC test/nvme/fdp/fdp.o 00:04:12.240 CC test/nvme/startup/startup.o 00:04:12.240 CC test/nvme/fused_ordering/fused_ordering.o 00:04:12.240 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:12.502 LINK memory_ut 00:04:12.502 CC test/accel/dif/dif.o 00:04:12.502 CC test/blobfs/mkfs/mkfs.o 00:04:12.502 CC test/lvol/esnap/esnap.o 00:04:12.502 LINK boot_partition 00:04:12.502 LINK startup 00:04:12.502 LINK connect_stress 00:04:12.502 LINK err_injection 00:04:12.502 LINK reserve 00:04:12.502 LINK doorbell_aers 00:04:12.502 LINK fused_ordering 00:04:12.764 LINK simple_copy 00:04:12.764 LINK mkfs 00:04:12.764 LINK reset 00:04:12.764 LINK overhead 00:04:12.764 LINK aer 00:04:12.764 LINK nvme_dp 00:04:12.764 LINK sgl 00:04:12.764 LINK nvme_compliance 00:04:12.764 LINK fdp 00:04:12.764 LINK iscsi_fuzz 00:04:12.764 CC examples/nvme/hello_world/hello_world.o 00:04:12.764 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:12.764 CC examples/nvme/reconnect/reconnect.o 00:04:12.764 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:12.764 CC examples/nvme/abort/abort.o 00:04:12.764 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:12.764 CC examples/nvme/arbitration/arbitration.o 00:04:12.764 CC examples/nvme/hotplug/hotplug.o 00:04:12.764 CC examples/accel/perf/accel_perf.o 00:04:13.026 CC examples/blob/hello_world/hello_blob.o 00:04:13.026 CC examples/blob/cli/blobcli.o 00:04:13.026 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:13.026 LINK cmb_copy 00:04:13.026 LINK pmr_persistence 00:04:13.026 LINK hello_world 00:04:13.026 LINK dif 00:04:13.026 LINK hotplug 00:04:13.026 LINK arbitration 00:04:13.287 LINK abort 00:04:13.287 LINK reconnect 00:04:13.287 LINK hello_blob 00:04:13.287 LINK hello_fsdev 00:04:13.287 LINK nvme_manage 00:04:13.287 LINK accel_perf 00:04:13.549 LINK blobcli 00:04:13.549 LINK cuse 00:04:13.549 CC test/bdev/bdevio/bdevio.o 00:04:14.123 CC examples/bdev/hello_world/hello_bdev.o 00:04:14.123 CC examples/bdev/bdevperf/bdevperf.o 00:04:14.123 LINK bdevio 00:04:14.123 LINK hello_bdev 00:04:14.695 LINK bdevperf 00:04:15.268 CC examples/nvmf/nvmf/nvmf.o 00:04:15.840 LINK nvmf 00:04:17.226 LINK esnap 00:04:17.226 00:04:17.226 real 0m55.023s 00:04:17.226 user 8m8.773s 00:04:17.226 sys 5m33.687s 00:04:17.226 09:36:32 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:17.227 09:36:32 make -- common/autotest_common.sh@10 -- $ set +x 00:04:17.227 ************************************ 00:04:17.227 END TEST make 00:04:17.227 ************************************ 00:04:17.489 09:36:32 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:17.489 09:36:32 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:17.489 09:36:32 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:17.489 09:36:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:17.489 09:36:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:17.489 09:36:32 -- pm/common@44 -- $ pid=3552488 00:04:17.489 09:36:32 -- pm/common@50 -- $ kill -TERM 3552488 00:04:17.489 09:36:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:17.489 09:36:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:17.489 09:36:32 -- pm/common@44 -- $ pid=3552489 00:04:17.489 09:36:32 -- pm/common@50 -- $ kill -TERM 3552489 00:04:17.489 09:36:32 -- pm/common@42 
-- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:17.489 09:36:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:17.489 09:36:32 -- pm/common@44 -- $ pid=3552491 00:04:17.489 09:36:32 -- pm/common@50 -- $ kill -TERM 3552491 00:04:17.489 09:36:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:17.489 09:36:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:17.489 09:36:32 -- pm/common@44 -- $ pid=3552515 00:04:17.489 09:36:32 -- pm/common@50 -- $ sudo -E kill -TERM 3552515 00:04:17.489 09:36:32 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:17.489 09:36:32 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:17.489 09:36:32 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:17.489 09:36:32 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:17.489 09:36:32 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:17.489 09:36:32 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:17.489 09:36:32 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:17.489 09:36:32 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:17.489 09:36:32 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:17.489 09:36:32 -- scripts/common.sh@336 -- # IFS=.-: 00:04:17.489 09:36:32 -- scripts/common.sh@336 -- # read -ra ver1 00:04:17.489 09:36:32 -- scripts/common.sh@337 -- # IFS=.-: 00:04:17.489 09:36:32 -- scripts/common.sh@337 -- # read -ra ver2 00:04:17.489 09:36:32 -- scripts/common.sh@338 -- # local 'op=<' 00:04:17.489 09:36:32 -- scripts/common.sh@340 -- # ver1_l=2 00:04:17.489 09:36:32 -- scripts/common.sh@341 -- # ver2_l=1 00:04:17.489 09:36:32 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:17.489 09:36:32 -- scripts/common.sh@344 -- # case "$op" in 00:04:17.489 09:36:32 -- scripts/common.sh@345 -- # : 1 00:04:17.489 09:36:32 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:17.489 09:36:32 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:17.489 09:36:32 -- scripts/common.sh@365 -- # decimal 1 00:04:17.489 09:36:32 -- scripts/common.sh@353 -- # local d=1 00:04:17.489 09:36:32 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:17.489 09:36:32 -- scripts/common.sh@355 -- # echo 1 00:04:17.489 09:36:32 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:17.489 09:36:32 -- scripts/common.sh@366 -- # decimal 2 00:04:17.489 09:36:32 -- scripts/common.sh@353 -- # local d=2 00:04:17.489 09:36:32 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:17.489 09:36:32 -- scripts/common.sh@355 -- # echo 2 00:04:17.751 09:36:32 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:17.751 09:36:32 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:17.751 09:36:32 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:17.751 09:36:32 -- scripts/common.sh@368 -- # return 0 00:04:17.751 09:36:32 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:17.751 09:36:32 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:17.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.751 --rc genhtml_branch_coverage=1 00:04:17.751 --rc genhtml_function_coverage=1 00:04:17.751 --rc genhtml_legend=1 00:04:17.751 --rc geninfo_all_blocks=1 00:04:17.751 --rc geninfo_unexecuted_blocks=1 00:04:17.751 00:04:17.751 ' 00:04:17.751 09:36:32 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:17.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.751 --rc genhtml_branch_coverage=1 00:04:17.751 --rc genhtml_function_coverage=1 00:04:17.751 --rc genhtml_legend=1 00:04:17.751 --rc geninfo_all_blocks=1 00:04:17.751 --rc geninfo_unexecuted_blocks=1 00:04:17.751 00:04:17.751 ' 00:04:17.751 09:36:32 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:17.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.751 --rc genhtml_branch_coverage=1 00:04:17.751 --rc genhtml_function_coverage=1 00:04:17.751 --rc genhtml_legend=1 00:04:17.751 --rc geninfo_all_blocks=1 00:04:17.751 --rc geninfo_unexecuted_blocks=1 00:04:17.751 00:04:17.751 ' 00:04:17.751 09:36:32 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:17.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.751 --rc genhtml_branch_coverage=1 00:04:17.751 --rc genhtml_function_coverage=1 00:04:17.751 --rc genhtml_legend=1 00:04:17.751 --rc geninfo_all_blocks=1 00:04:17.751 --rc geninfo_unexecuted_blocks=1 00:04:17.751 00:04:17.751 ' 00:04:17.751 09:36:32 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:17.751 09:36:32 -- nvmf/common.sh@7 -- # uname -s 00:04:17.751 09:36:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:17.751 09:36:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:17.751 09:36:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:17.751 09:36:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:17.751 09:36:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:17.751 09:36:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:17.751 09:36:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:17.751 09:36:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:17.751 09:36:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:17.751 09:36:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:17.751 09:36:32 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:17.751 09:36:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:17.751 09:36:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:17.751 09:36:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:17.751 09:36:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:17.751 09:36:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:17.751 09:36:32 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:17.751 09:36:32 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:17.751 09:36:32 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:17.751 09:36:32 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:17.752 09:36:32 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:17.752 09:36:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:17.752 09:36:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:17.752 09:36:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:17.752 09:36:32 -- paths/export.sh@5 -- # export PATH 00:04:17.752 09:36:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:17.752 09:36:32 -- nvmf/common.sh@51 -- # : 0 00:04:17.752 09:36:32 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:17.752 09:36:32 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:17.752 09:36:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:17.752 09:36:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:17.752 09:36:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:17.752 09:36:32 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:17.752 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:17.752 09:36:32 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:17.752 09:36:32 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:17.752 09:36:32 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:17.752 09:36:32 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:17.752 09:36:32 -- spdk/autotest.sh@32 -- # uname -s 00:04:17.752 09:36:32 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:17.752 09:36:32 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:17.752 09:36:32 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
00:04:17.752 09:36:33 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:17.752 09:36:33 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:17.752 09:36:33 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:17.752 09:36:33 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:17.752 09:36:33 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:17.752 09:36:33 -- spdk/autotest.sh@48 -- # udevadm_pid=3618287 00:04:17.752 09:36:33 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:17.752 09:36:33 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:17.752 09:36:33 -- pm/common@17 -- # local monitor 00:04:17.752 09:36:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:17.752 09:36:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:17.752 09:36:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:17.752 09:36:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:17.752 09:36:33 -- pm/common@21 -- # date +%s 00:04:17.752 09:36:33 -- pm/common@25 -- # sleep 1 00:04:17.752 09:36:33 -- pm/common@21 -- # date +%s 00:04:17.752 09:36:33 -- pm/common@21 -- # date +%s 00:04:17.752 09:36:33 -- pm/common@21 -- # date +%s 00:04:17.752 09:36:33 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732696593 00:04:17.752 09:36:33 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732696593 00:04:17.752 09:36:33 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732696593 00:04:17.752 09:36:33 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732696593 00:04:17.752 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732696593_collect-cpu-load.pm.log 00:04:17.752 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732696593_collect-vmstat.pm.log 00:04:17.752 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732696593_collect-cpu-temp.pm.log 00:04:17.752 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732696593_collect-bmc-pm.bmc.pm.log 00:04:18.695 09:36:34 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:18.695 09:36:34 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:18.695 09:36:34 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:18.695 09:36:34 -- common/autotest_common.sh@10 -- # set +x 00:04:18.695 09:36:34 -- spdk/autotest.sh@59 -- # create_test_list 00:04:18.695 09:36:34 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:18.695 09:36:34 -- common/autotest_common.sh@10 -- # set +x 00:04:18.695 09:36:34 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:18.695 09:36:34 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:18.695 09:36:34 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:18.695 09:36:34 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:18.695 09:36:34 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:18.695 09:36:34 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:18.695 09:36:34 -- common/autotest_common.sh@1457 -- # uname 00:04:18.695 09:36:34 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:18.695 09:36:34 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:18.695 09:36:34 -- common/autotest_common.sh@1477 -- # uname 00:04:18.695 09:36:34 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:18.695 09:36:34 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:18.695 09:36:34 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:18.955 lcov: LCOV version 1.15 00:04:18.955 09:36:34 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:33.972 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:33.972 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:52.086 09:37:04 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:52.086 09:37:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:52.086 09:37:04 -- common/autotest_common.sh@10 -- # set +x 00:04:52.086 09:37:04 -- spdk/autotest.sh@78 -- # rm -f 00:04:52.086 09:37:04 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:52.659 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:04:52.659 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:04:52.921 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:04:52.921 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:04:52.921 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:04:52.921 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:04:52.921 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:04:52.921 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:04:52.921 0000:65:00.0 (144d a80a): Already using the nvme driver 00:04:52.921 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:04:52.921 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:04:52.921 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:04:53.182 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:04:53.182 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:04:53.182 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:04:53.182 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:04:53.182 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:04:53.443 09:37:08 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:04:53.443 09:37:08 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:53.443 09:37:08 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:53.443 09:37:08 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:53.443 09:37:08 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:53.443 09:37:08 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:53.443 09:37:08 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:53.443 09:37:08 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:53.443 09:37:08 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:53.443 09:37:08 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:53.443 09:37:08 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:53.443 09:37:08 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:53.443 09:37:08 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:53.443 09:37:08 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:53.443 09:37:08 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:53.443 No valid GPT data, bailing 00:04:53.443 09:37:08 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:53.443 09:37:08 -- scripts/common.sh@394 -- # pt= 00:04:53.443 09:37:08 -- scripts/common.sh@395 -- # return 1 00:04:53.443 09:37:08 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:53.443 1+0 records in 00:04:53.443 1+0 records out 00:04:53.443 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00146476 s, 716 MB/s 00:04:53.443 09:37:08 -- spdk/autotest.sh@105 -- # sync 00:04:53.443 09:37:08 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:53.443 09:37:08 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:53.443 09:37:08 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:03.450 09:37:17 -- spdk/autotest.sh@111 -- # uname -s 00:05:03.450 09:37:17 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:03.450 09:37:17 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:03.450 09:37:17 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:05.995 Hugepages 00:05:05.995 node hugesize free / total 00:05:05.995 node0 1048576kB 0 / 0 00:05:05.995 node0 2048kB 0 / 0 00:05:05.995 node1 1048576kB 0 / 0 00:05:05.995 node1 2048kB 0 / 0 00:05:05.995 00:05:05.995 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:05.995 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:05:05.995 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:05:05.995 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:05:05.995 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:05:05.995 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:05:05.995 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:05:05.995 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:05:05.995 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:05:05.995 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:05:05.995 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:05:05.996 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:05:05.996 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:05:05.996 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:05:05.996 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:05:05.996 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:05:05.996 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:05:05.996 I/OAT 0000:80:01.7 8086 
0b00 1 ioatdma - - 00:05:05.996 09:37:21 -- spdk/autotest.sh@117 -- # uname -s 00:05:05.996 09:37:21 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:05.996 09:37:21 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:05.996 09:37:21 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:09.296 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:09.296 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:09.296 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:09.296 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:09.296 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:09.296 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:09.296 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:09.296 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:09.296 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:09.296 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:09.296 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:09.296 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:09.296 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:09.296 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:09.296 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:09.297 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:11.209 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:11.470 09:37:26 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:12.414 09:37:27 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:12.414 09:37:27 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:12.414 09:37:27 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:12.414 09:37:27 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:12.414 09:37:27 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:12.414 09:37:27 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:12.414 09:37:27 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:12.414 09:37:27 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:12.414 09:37:27 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:12.674 09:37:27 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:12.674 09:37:27 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:05:12.674 09:37:27 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:15.975 Waiting for block devices as requested 00:05:15.975 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:16.236 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:16.236 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:16.236 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:16.496 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:16.496 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:16.496 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:16.496 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:16.757 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:05:16.757 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:17.018 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:17.018 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:17.018 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:17.279 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:17.279 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:17.279 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:17.541 0000:00:01.1 (8086 0b00): 
vfio-pci -> ioatdma 00:05:17.803 09:37:33 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:17.803 09:37:33 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:17.803 09:37:33 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:17.803 09:37:33 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:05:17.803 09:37:33 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:17.803 09:37:33 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:17.803 09:37:33 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:17.803 09:37:33 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:17.803 09:37:33 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:17.803 09:37:33 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:17.803 09:37:33 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:17.803 09:37:33 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:17.803 09:37:33 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:17.803 09:37:33 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:05:17.803 09:37:33 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:17.803 09:37:33 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:17.803 09:37:33 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:17.803 09:37:33 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:17.803 09:37:33 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:17.803 09:37:33 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:17.803 09:37:33 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:17.803 09:37:33 -- common/autotest_common.sh@1543 -- # continue 00:05:17.803 09:37:33 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:17.803 09:37:33 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:17.803 09:37:33 -- common/autotest_common.sh@10 -- # set +x 00:05:17.803 09:37:33 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:17.803 09:37:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:17.803 09:37:33 -- common/autotest_common.sh@10 -- # set +x 00:05:17.803 09:37:33 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:22.010 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:22.010 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:22.010 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:22.010 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:22.010 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:22.010 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:22.010 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:22.010 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:22.010 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:22.010 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:22.010 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:22.010 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:22.010 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:22.010 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:22.010 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:22.010 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:22.010 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:22.010 09:37:37 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:05:22.010 09:37:37 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:22.010 09:37:37 -- common/autotest_common.sh@10 -- # set +x 00:05:22.010 09:37:37 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:22.010 09:37:37 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:22.010 09:37:37 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:22.010 09:37:37 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:22.010 09:37:37 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:22.010 09:37:37 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:22.010 09:37:37 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:22.010 09:37:37 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:22.010 09:37:37 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:22.010 09:37:37 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:22.010 09:37:37 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:22.010 09:37:37 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:22.010 09:37:37 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:22.010 09:37:37 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:22.010 09:37:37 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:05:22.010 09:37:37 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:22.010 09:37:37 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:22.010 09:37:37 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:05:22.010 09:37:37 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:22.010 09:37:37 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:22.010 09:37:37 -- common/autotest_common.sh@1572 -- # return 0 00:05:22.010 09:37:37 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:22.010 09:37:37 -- common/autotest_common.sh@1580 -- # return 0 00:05:22.010 09:37:37 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:22.010 09:37:37 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:22.010 09:37:37 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:22.010 09:37:37 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:22.010 09:37:37 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:22.010 09:37:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:22.010 09:37:37 -- common/autotest_common.sh@10 -- # set +x 00:05:22.010 09:37:37 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:22.010 09:37:37 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:22.010 09:37:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.010 09:37:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.010 09:37:37 -- common/autotest_common.sh@10 -- # set +x 00:05:22.010 ************************************ 00:05:22.010 START TEST env 00:05:22.010 ************************************ 00:05:22.010 09:37:37 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:22.270 * Looking for test storage... 
00:05:22.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:22.271 09:37:37 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:22.271 09:37:37 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:22.271 09:37:37 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:22.271 09:37:37 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:22.271 09:37:37 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:22.271 09:37:37 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.271 09:37:37 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.271 09:37:37 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.271 09:37:37 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.271 09:37:37 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.271 09:37:37 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:22.271 09:37:37 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.271 09:37:37 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:22.271 09:37:37 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.271 09:37:37 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.271 09:37:37 env -- scripts/common.sh@344 -- # case "$op" in 00:05:22.271 09:37:37 env -- scripts/common.sh@345 -- # : 1 00:05:22.271 09:37:37 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.271 09:37:37 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:22.271 09:37:37 env -- scripts/common.sh@365 -- # decimal 1 00:05:22.271 09:37:37 env -- scripts/common.sh@353 -- # local d=1 00:05:22.271 09:37:37 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.271 09:37:37 env -- scripts/common.sh@355 -- # echo 1 00:05:22.271 09:37:37 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:22.271 09:37:37 env -- scripts/common.sh@366 -- # decimal 2 00:05:22.271 09:37:37 env -- scripts/common.sh@353 -- # local d=2 00:05:22.271 09:37:37 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.271 09:37:37 env -- scripts/common.sh@355 -- # echo 2 00:05:22.271 09:37:37 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:22.271 09:37:37 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:22.271 09:37:37 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:22.271 09:37:37 env -- scripts/common.sh@368 -- # return 0 00:05:22.271 09:37:37 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.271 09:37:37 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:22.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.271 --rc genhtml_branch_coverage=1 00:05:22.271 --rc genhtml_function_coverage=1 00:05:22.271 --rc genhtml_legend=1 00:05:22.271 --rc geninfo_all_blocks=1 00:05:22.271 --rc geninfo_unexecuted_blocks=1 00:05:22.271 00:05:22.271 ' 00:05:22.271 09:37:37 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:22.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.271 --rc genhtml_branch_coverage=1 00:05:22.271 --rc genhtml_function_coverage=1 00:05:22.271 --rc genhtml_legend=1 00:05:22.271 --rc geninfo_all_blocks=1 00:05:22.271 --rc geninfo_unexecuted_blocks=1 00:05:22.271 00:05:22.271 ' 00:05:22.271 09:37:37 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:22.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.271 --rc genhtml_branch_coverage=1 00:05:22.271 --rc genhtml_function_coverage=1 
00:05:22.271 --rc genhtml_legend=1 00:05:22.271 --rc geninfo_all_blocks=1 00:05:22.271 --rc geninfo_unexecuted_blocks=1 00:05:22.271 00:05:22.271 ' 00:05:22.271 09:37:37 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:22.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.271 --rc genhtml_branch_coverage=1 00:05:22.271 --rc genhtml_function_coverage=1 00:05:22.271 --rc genhtml_legend=1 00:05:22.271 --rc geninfo_all_blocks=1 00:05:22.271 --rc geninfo_unexecuted_blocks=1 00:05:22.271 00:05:22.271 ' 00:05:22.271 09:37:37 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:22.271 09:37:37 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.271 09:37:37 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.271 09:37:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:22.271 ************************************ 00:05:22.271 START TEST env_memory 00:05:22.271 ************************************ 00:05:22.271 09:37:37 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:22.271 00:05:22.271 00:05:22.271 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.271 http://cunit.sourceforge.net/ 00:05:22.271 00:05:22.271 00:05:22.271 Suite: memory 00:05:22.532 Test: alloc and free memory map ...[2024-11-27 09:37:37.765568] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:22.532 passed 00:05:22.533 Test: mem map translation ...[2024-11-27 09:37:37.791240] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:22.533 [2024-11-27 09:37:37.791285] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:22.533 [2024-11-27 09:37:37.791332] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:22.533 [2024-11-27 09:37:37.791339] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:22.533 passed 00:05:22.533 Test: mem map registration ...[2024-11-27 09:37:37.846539] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:22.533 [2024-11-27 09:37:37.846564] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:22.533 passed 00:05:22.533 Test: mem map adjacent registrations ...passed 00:05:22.533 00:05:22.533 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.533 suites 1 1 n/a 0 0 00:05:22.533 tests 4 4 4 0 0 00:05:22.533 asserts 152 152 152 0 n/a 00:05:22.533 00:05:22.533 Elapsed time = 0.195 seconds 00:05:22.533 00:05:22.533 real 0m0.210s 00:05:22.533 user 0m0.195s 00:05:22.533 sys 0m0.014s 00:05:22.533 09:37:37 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.533 09:37:37 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
00:05:22.533 ************************************ 00:05:22.533 END TEST env_memory 00:05:22.533 ************************************ 00:05:22.533 09:37:37 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:22.533 09:37:37 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.533 09:37:37 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.533 09:37:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:22.795 ************************************ 00:05:22.795 START TEST env_vtophys 00:05:22.795 ************************************ 00:05:22.795 09:37:38 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:22.795 EAL: lib.eal log level changed from notice to debug 00:05:22.795 EAL: Detected lcore 0 as core 0 on socket 0 00:05:22.795 EAL: Detected lcore 1 as core 1 on socket 0 00:05:22.795 EAL: Detected lcore 2 as core 2 on socket 0 00:05:22.795 EAL: Detected lcore 3 as core 3 on socket 0 00:05:22.795 EAL: Detected lcore 4 as core 4 on socket 0 00:05:22.795 EAL: Detected lcore 5 as core 5 on socket 0 00:05:22.795 EAL: Detected lcore 6 as core 6 on socket 0 00:05:22.795 EAL: Detected lcore 7 as core 7 on socket 0 00:05:22.795 EAL: Detected lcore 8 as core 8 on socket 0 00:05:22.795 EAL: Detected lcore 9 as core 9 on socket 0 00:05:22.795 EAL: Detected lcore 10 as core 10 on socket 0 00:05:22.795 EAL: Detected lcore 11 as core 11 on socket 0 00:05:22.795 EAL: Detected lcore 12 as core 12 on socket 0 00:05:22.795 EAL: Detected lcore 13 as core 13 on socket 0 00:05:22.795 EAL: Detected lcore 14 as core 14 on socket 0 00:05:22.795 EAL: Detected lcore 15 as core 15 on socket 0 00:05:22.795 EAL: Detected lcore 16 as core 16 on socket 0 00:05:22.795 EAL: Detected lcore 17 as core 17 on socket 0 00:05:22.795 EAL: Detected lcore 18 as core 18 on socket 0 00:05:22.795 EAL: Detected lcore 19 as core 19 on socket 0 00:05:22.795 EAL: Detected lcore 20 as core 20 on socket 0 00:05:22.795 EAL: Detected lcore 21 as core 21 on socket 0 00:05:22.795 EAL: Detected lcore 22 as core 22 on socket 0 00:05:22.795 EAL: Detected lcore 23 as core 23 on socket 0 00:05:22.795 EAL: Detected lcore 24 as core 24 on socket 0 00:05:22.795 EAL: Detected lcore 25 as core 25 on socket 0 00:05:22.795 EAL: Detected lcore 26 as core 26 on socket 0 00:05:22.795 EAL: Detected lcore 27 as core 27 on socket 0 00:05:22.795 EAL: Detected lcore 28 as core 28 on socket 0 00:05:22.795 EAL: Detected lcore 29 as core 29 on socket 0 00:05:22.795 EAL: Detected lcore 30 as core 30 on socket 0 00:05:22.795 EAL: Detected lcore 31 as core 31 on socket 0 00:05:22.795 EAL: Detected lcore 32 as core 32 on socket 0 00:05:22.795 EAL: Detected lcore 33 as core 33 on socket 0 00:05:22.795 EAL: Detected lcore 34 as core 34 on socket 0 00:05:22.795 EAL: Detected lcore 35 as core 35 on socket 0 00:05:22.795 EAL: Detected lcore 36 as core 0 on socket 1 00:05:22.795 EAL: Detected lcore 37 as core 1 on socket 1 00:05:22.795 EAL: Detected lcore 38 as core 2 on socket 1 00:05:22.795 EAL: Detected lcore 39 as core 3 on socket 1 00:05:22.795 EAL: Detected lcore 40 as core 4 on socket 1 00:05:22.795 EAL: Detected lcore 41 as core 5 on socket 1 00:05:22.795 EAL: Detected lcore 42 as core 6 on socket 1 00:05:22.795 EAL: Detected lcore 43 as core 7 on socket 1 00:05:22.795 EAL: Detected lcore 44 as core 8 on socket 1 00:05:22.795 EAL: Detected lcore 45 as core 9 on socket 1 
00:05:22.795 EAL: Detected lcore 46 as core 10 on socket 1 00:05:22.795 EAL: Detected lcore 47 as core 11 on socket 1 00:05:22.795 EAL: Detected lcore 48 as core 12 on socket 1 00:05:22.795 EAL: Detected lcore 49 as core 13 on socket 1 00:05:22.795 EAL: Detected lcore 50 as core 14 on socket 1 00:05:22.795 EAL: Detected lcore 51 as core 15 on socket 1 00:05:22.795 EAL: Detected lcore 52 as core 16 on socket 1 00:05:22.795 EAL: Detected lcore 53 as core 17 on socket 1 00:05:22.795 EAL: Detected lcore 54 as core 18 on socket 1 00:05:22.795 EAL: Detected lcore 55 as core 19 on socket 1 00:05:22.795 EAL: Detected lcore 56 as core 20 on socket 1 00:05:22.795 EAL: Detected lcore 57 as core 21 on socket 1 00:05:22.795 EAL: Detected lcore 58 as core 22 on socket 1 00:05:22.795 EAL: Detected lcore 59 as core 23 on socket 1 00:05:22.795 EAL: Detected lcore 60 as core 24 on socket 1 00:05:22.795 EAL: Detected lcore 61 as core 25 on socket 1 00:05:22.795 EAL: Detected lcore 62 as core 26 on socket 1 00:05:22.795 EAL: Detected lcore 63 as core 27 on socket 1 00:05:22.795 EAL: Detected lcore 64 as core 28 on socket 1 00:05:22.795 EAL: Detected lcore 65 as core 29 on socket 1 00:05:22.795 EAL: Detected lcore 66 as core 30 on socket 1 00:05:22.795 EAL: Detected lcore 67 as core 31 on socket 1 00:05:22.795 EAL: Detected lcore 68 as core 32 on socket 1 00:05:22.795 EAL: Detected lcore 69 as core 33 on socket 1 00:05:22.795 EAL: Detected lcore 70 as core 34 on socket 1 00:05:22.795 EAL: Detected lcore 71 as core 35 on socket 1 00:05:22.795 EAL: Detected lcore 72 as core 0 on socket 0 00:05:22.795 EAL: Detected lcore 73 as core 1 on socket 0 00:05:22.795 EAL: Detected lcore 74 as core 2 on socket 0 00:05:22.795 EAL: Detected lcore 75 as core 3 on socket 0 00:05:22.795 EAL: Detected lcore 76 as core 4 on socket 0 00:05:22.795 EAL: Detected lcore 77 as core 5 on socket 0 00:05:22.795 EAL: Detected lcore 78 as core 6 on socket 0 00:05:22.795 EAL: Detected lcore 79 as core 7 on socket 0 00:05:22.795 EAL: Detected lcore 80 as core 8 on socket 0 00:05:22.795 EAL: Detected lcore 81 as core 9 on socket 0 00:05:22.795 EAL: Detected lcore 82 as core 10 on socket 0 00:05:22.795 EAL: Detected lcore 83 as core 11 on socket 0 00:05:22.795 EAL: Detected lcore 84 as core 12 on socket 0 00:05:22.795 EAL: Detected lcore 85 as core 13 on socket 0 00:05:22.795 EAL: Detected lcore 86 as core 14 on socket 0 00:05:22.795 EAL: Detected lcore 87 as core 15 on socket 0 00:05:22.795 EAL: Detected lcore 88 as core 16 on socket 0 00:05:22.795 EAL: Detected lcore 89 as core 17 on socket 0 00:05:22.795 EAL: Detected lcore 90 as core 18 on socket 0 00:05:22.795 EAL: Detected lcore 91 as core 19 on socket 0 00:05:22.795 EAL: Detected lcore 92 as core 20 on socket 0 00:05:22.795 EAL: Detected lcore 93 as core 21 on socket 0 00:05:22.795 EAL: Detected lcore 94 as core 22 on socket 0 00:05:22.795 EAL: Detected lcore 95 as core 23 on socket 0 00:05:22.795 EAL: Detected lcore 96 as core 24 on socket 0 00:05:22.795 EAL: Detected lcore 97 as core 25 on socket 0 00:05:22.795 EAL: Detected lcore 98 as core 26 on socket 0 00:05:22.795 EAL: Detected lcore 99 as core 27 on socket 0 00:05:22.796 EAL: Detected lcore 100 as core 28 on socket 0 00:05:22.796 EAL: Detected lcore 101 as core 29 on socket 0 00:05:22.796 EAL: Detected lcore 102 as core 30 on socket 0 00:05:22.796 EAL: Detected lcore 103 as core 31 on socket 0 00:05:22.796 EAL: Detected lcore 104 as core 32 on socket 0 00:05:22.796 EAL: Detected lcore 105 as core 33 on socket 0 00:05:22.796 EAL: 
Detected lcore 106 as core 34 on socket 0 00:05:22.796 EAL: Detected lcore 107 as core 35 on socket 0 00:05:22.796 EAL: Detected lcore 108 as core 0 on socket 1 00:05:22.796 EAL: Detected lcore 109 as core 1 on socket 1 00:05:22.796 EAL: Detected lcore 110 as core 2 on socket 1 00:05:22.796 EAL: Detected lcore 111 as core 3 on socket 1 00:05:22.796 EAL: Detected lcore 112 as core 4 on socket 1 00:05:22.796 EAL: Detected lcore 113 as core 5 on socket 1 00:05:22.796 EAL: Detected lcore 114 as core 6 on socket 1 00:05:22.796 EAL: Detected lcore 115 as core 7 on socket 1 00:05:22.796 EAL: Detected lcore 116 as core 8 on socket 1 00:05:22.796 EAL: Detected lcore 117 as core 9 on socket 1 00:05:22.796 EAL: Detected lcore 118 as core 10 on socket 1 00:05:22.796 EAL: Detected lcore 119 as core 11 on socket 1 00:05:22.796 EAL: Detected lcore 120 as core 12 on socket 1 00:05:22.796 EAL: Detected lcore 121 as core 13 on socket 1 00:05:22.796 EAL: Detected lcore 122 as core 14 on socket 1 00:05:22.796 EAL: Detected lcore 123 as core 15 on socket 1 00:05:22.796 EAL: Detected lcore 124 as core 16 on socket 1 00:05:22.796 EAL: Detected lcore 125 as core 17 on socket 1 00:05:22.796 EAL: Detected lcore 126 as core 18 on socket 1 00:05:22.796 EAL: Detected lcore 127 as core 19 on socket 1 00:05:22.796 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:22.796 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:22.796 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:22.796 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:22.796 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:22.796 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:22.796 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:22.796 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:22.796 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:22.796 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:22.796 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:22.796 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:22.796 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:22.796 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:22.796 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:22.796 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:22.796 EAL: Maximum logical cores by configuration: 128 00:05:22.796 EAL: Detected CPU lcores: 128 00:05:22.796 EAL: Detected NUMA nodes: 2 00:05:22.796 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:22.796 EAL: Detected shared linkage of DPDK 00:05:22.796 EAL: No shared files mode enabled, IPC will be disabled 00:05:22.796 EAL: Bus pci wants IOVA as 'DC' 00:05:22.796 EAL: Buses did not request a specific IOVA mode. 00:05:22.796 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:22.796 EAL: Selected IOVA mode 'VA' 00:05:22.796 EAL: Probing VFIO support... 00:05:22.796 EAL: IOMMU type 1 (Type 1) is supported 00:05:22.796 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:22.796 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:22.796 EAL: VFIO support initialized 00:05:22.796 EAL: Ask a virtual area of 0x2e000 bytes 00:05:22.796 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:22.796 EAL: Setting up physically contiguous memory... 
00:05:22.796 EAL: Setting maximum number of open files to 524288 00:05:22.796 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:22.796 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:22.796 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:22.796 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.796 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:22.796 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.796 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.796 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:22.796 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:22.796 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.796 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:22.796 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.796 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.796 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:22.796 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:22.796 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.796 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:22.796 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.796 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.796 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:22.796 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:22.796 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.796 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:22.796 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.796 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.796 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:22.796 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:22.796 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:22.796 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.796 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:22.796 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:22.796 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.796 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:22.796 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:22.796 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.796 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:22.796 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:22.796 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.796 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:22.796 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:22.796 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.796 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:22.796 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:22.796 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.796 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:22.796 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:22.796 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.796 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:22.796 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:22.796 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.796 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:22.796 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:22.796 EAL: Hugepages will be freed exactly as allocated. 00:05:22.796 EAL: No shared files mode enabled, IPC is disabled 00:05:22.796 EAL: No shared files mode enabled, IPC is disabled 00:05:22.796 EAL: TSC frequency is ~2400000 KHz 00:05:22.796 EAL: Main lcore 0 is ready (tid=7fd95cddba00;cpuset=[0]) 00:05:22.796 EAL: Trying to obtain current memory policy. 00:05:22.796 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.796 EAL: Restoring previous memory policy: 0 00:05:22.796 EAL: request: mp_malloc_sync 00:05:22.796 EAL: No shared files mode enabled, IPC is disabled 00:05:22.796 EAL: Heap on socket 0 was expanded by 2MB 00:05:22.796 EAL: No shared files mode enabled, IPC is disabled 00:05:22.796 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:22.796 EAL: Mem event callback 'spdk:(nil)' registered 00:05:22.796 00:05:22.796 00:05:22.796 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.796 http://cunit.sourceforge.net/ 00:05:22.796 00:05:22.796 00:05:22.796 Suite: components_suite 00:05:22.796 Test: vtophys_malloc_test ...passed 00:05:22.796 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:22.796 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.796 EAL: Restoring previous memory policy: 4 00:05:22.796 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.796 EAL: request: mp_malloc_sync 00:05:22.796 EAL: No shared files mode enabled, IPC is disabled 00:05:22.796 EAL: Heap on socket 0 was expanded by 4MB 00:05:22.796 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.796 EAL: request: mp_malloc_sync 00:05:22.796 EAL: No shared files mode enabled, IPC is disabled 00:05:22.796 EAL: Heap on socket 0 was shrunk by 4MB 00:05:22.796 EAL: Trying to obtain current memory policy. 00:05:22.796 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.796 EAL: Restoring previous memory policy: 4 00:05:22.796 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.796 EAL: request: mp_malloc_sync 00:05:22.796 EAL: No shared files mode enabled, IPC is disabled 00:05:22.796 EAL: Heap on socket 0 was expanded by 6MB 00:05:22.796 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.796 EAL: request: mp_malloc_sync 00:05:22.796 EAL: No shared files mode enabled, IPC is disabled 00:05:22.796 EAL: Heap on socket 0 was shrunk by 6MB 00:05:22.796 EAL: Trying to obtain current memory policy. 00:05:22.796 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.796 EAL: Restoring previous memory policy: 4 00:05:22.796 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.796 EAL: request: mp_malloc_sync 00:05:22.796 EAL: No shared files mode enabled, IPC is disabled 00:05:22.796 EAL: Heap on socket 0 was expanded by 10MB 00:05:22.796 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.796 EAL: request: mp_malloc_sync 00:05:22.796 EAL: No shared files mode enabled, IPC is disabled 00:05:22.796 EAL: Heap on socket 0 was shrunk by 10MB 00:05:22.796 EAL: Trying to obtain current memory policy. 
00:05:22.796 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.796 EAL: Restoring previous memory policy: 4 00:05:22.796 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.796 EAL: request: mp_malloc_sync 00:05:22.796 EAL: No shared files mode enabled, IPC is disabled 00:05:22.796 EAL: Heap on socket 0 was expanded by 18MB 00:05:22.796 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.796 EAL: request: mp_malloc_sync 00:05:22.796 EAL: No shared files mode enabled, IPC is disabled 00:05:22.796 EAL: Heap on socket 0 was shrunk by 18MB 00:05:22.796 EAL: Trying to obtain current memory policy. 00:05:22.796 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.796 EAL: Restoring previous memory policy: 4 00:05:22.797 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.797 EAL: request: mp_malloc_sync 00:05:22.797 EAL: No shared files mode enabled, IPC is disabled 00:05:22.797 EAL: Heap on socket 0 was expanded by 34MB 00:05:22.797 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.797 EAL: request: mp_malloc_sync 00:05:22.797 EAL: No shared files mode enabled, IPC is disabled 00:05:22.797 EAL: Heap on socket 0 was shrunk by 34MB 00:05:22.797 EAL: Trying to obtain current memory policy. 00:05:22.797 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.797 EAL: Restoring previous memory policy: 4 00:05:22.797 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.797 EAL: request: mp_malloc_sync 00:05:22.797 EAL: No shared files mode enabled, IPC is disabled 00:05:22.797 EAL: Heap on socket 0 was expanded by 66MB 00:05:22.797 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.797 EAL: request: mp_malloc_sync 00:05:22.797 EAL: No shared files mode enabled, IPC is disabled 00:05:22.797 EAL: Heap on socket 0 was shrunk by 66MB 00:05:22.797 EAL: Trying to obtain current memory policy. 00:05:22.797 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.797 EAL: Restoring previous memory policy: 4 00:05:22.797 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.797 EAL: request: mp_malloc_sync 00:05:22.797 EAL: No shared files mode enabled, IPC is disabled 00:05:22.797 EAL: Heap on socket 0 was expanded by 130MB 00:05:22.797 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.797 EAL: request: mp_malloc_sync 00:05:22.797 EAL: No shared files mode enabled, IPC is disabled 00:05:22.797 EAL: Heap on socket 0 was shrunk by 130MB 00:05:22.797 EAL: Trying to obtain current memory policy. 00:05:22.797 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.797 EAL: Restoring previous memory policy: 4 00:05:22.797 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.797 EAL: request: mp_malloc_sync 00:05:22.797 EAL: No shared files mode enabled, IPC is disabled 00:05:22.797 EAL: Heap on socket 0 was expanded by 258MB 00:05:23.058 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.058 EAL: request: mp_malloc_sync 00:05:23.058 EAL: No shared files mode enabled, IPC is disabled 00:05:23.058 EAL: Heap on socket 0 was shrunk by 258MB 00:05:23.058 EAL: Trying to obtain current memory policy. 
00:05:23.058 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.058 EAL: Restoring previous memory policy: 4 00:05:23.058 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.058 EAL: request: mp_malloc_sync 00:05:23.058 EAL: No shared files mode enabled, IPC is disabled 00:05:23.058 EAL: Heap on socket 0 was expanded by 514MB 00:05:23.058 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.058 EAL: request: mp_malloc_sync 00:05:23.058 EAL: No shared files mode enabled, IPC is disabled 00:05:23.058 EAL: Heap on socket 0 was shrunk by 514MB 00:05:23.058 EAL: Trying to obtain current memory policy. 00:05:23.058 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.319 EAL: Restoring previous memory policy: 4 00:05:23.319 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.319 EAL: request: mp_malloc_sync 00:05:23.319 EAL: No shared files mode enabled, IPC is disabled 00:05:23.319 EAL: Heap on socket 0 was expanded by 1026MB 00:05:23.319 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.581 EAL: request: mp_malloc_sync 00:05:23.581 EAL: No shared files mode enabled, IPC is disabled 00:05:23.581 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:23.581 passed 00:05:23.581 00:05:23.581 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.581 suites 1 1 n/a 0 0 00:05:23.581 tests 2 2 2 0 0 00:05:23.581 asserts 497 497 497 0 n/a 00:05:23.581 00:05:23.581 Elapsed time = 0.688 seconds 00:05:23.581 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.581 EAL: request: mp_malloc_sync 00:05:23.581 EAL: No shared files mode enabled, IPC is disabled 00:05:23.581 EAL: Heap on socket 0 was shrunk by 2MB 00:05:23.581 EAL: No shared files mode enabled, IPC is disabled 00:05:23.581 EAL: No shared files mode enabled, IPC is disabled 00:05:23.581 EAL: No shared files mode enabled, IPC is disabled 00:05:23.581 00:05:23.581 real 0m0.837s 00:05:23.581 user 0m0.458s 00:05:23.581 sys 0m0.354s 00:05:23.581 09:37:38 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.581 09:37:38 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:23.581 ************************************ 00:05:23.581 END TEST env_vtophys 00:05:23.581 ************************************ 00:05:23.581 09:37:38 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:23.581 09:37:38 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.581 09:37:38 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.581 09:37:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:23.581 ************************************ 00:05:23.581 START TEST env_pci 00:05:23.581 ************************************ 00:05:23.581 09:37:38 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:23.581 00:05:23.581 00:05:23.581 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.581 http://cunit.sourceforge.net/ 00:05:23.581 00:05:23.581 00:05:23.581 Suite: pci 00:05:23.581 Test: pci_hook ...[2024-11-27 09:37:38.937411] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3637703 has claimed it 00:05:23.581 EAL: Cannot find device (10000:00:01.0) 00:05:23.581 EAL: Failed to attach device on primary process 00:05:23.581 passed 00:05:23.581 00:05:23.581 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:23.581 suites 1 1 n/a 0 0 00:05:23.581 tests 1 1 1 0 0 00:05:23.581 asserts 25 25 25 0 n/a 00:05:23.581 00:05:23.581 Elapsed time = 0.030 seconds 00:05:23.581 00:05:23.581 real 0m0.051s 00:05:23.581 user 0m0.016s 00:05:23.581 sys 0m0.034s 00:05:23.581 09:37:38 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.581 09:37:38 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:23.581 ************************************ 00:05:23.581 END TEST env_pci 00:05:23.581 ************************************ 00:05:23.581 09:37:39 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:23.581 09:37:39 env -- env/env.sh@15 -- # uname 00:05:23.581 09:37:39 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:23.581 09:37:39 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:23.581 09:37:39 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:23.581 09:37:39 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:23.581 09:37:39 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.581 09:37:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:23.842 ************************************ 00:05:23.842 START TEST env_dpdk_post_init 00:05:23.842 ************************************ 00:05:23.842 09:37:39 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:23.842 EAL: Detected CPU lcores: 128 00:05:23.842 EAL: Detected NUMA nodes: 2 00:05:23.842 EAL: Detected shared linkage of DPDK 00:05:23.842 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:23.842 EAL: Selected IOVA mode 'VA' 00:05:23.842 EAL: VFIO support initialized 00:05:23.842 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:23.842 EAL: Using IOMMU type 1 (Type 1) 00:05:24.103 EAL: Ignore mapping IO port bar(1) 00:05:24.103 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:24.103 EAL: Ignore mapping IO port bar(1) 00:05:24.363 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:24.363 EAL: Ignore mapping IO port bar(1) 00:05:24.623 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:24.623 EAL: Ignore mapping IO port bar(1) 00:05:24.884 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:24.884 EAL: Ignore mapping IO port bar(1) 00:05:24.884 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:25.145 EAL: Ignore mapping IO port bar(1) 00:05:25.145 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:25.406 EAL: Ignore mapping IO port bar(1) 00:05:25.406 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:25.667 EAL: Ignore mapping IO port bar(1) 00:05:25.667 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:25.927 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:25.927 EAL: Ignore mapping IO port bar(1) 00:05:26.188 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:05:26.188 EAL: Ignore mapping IO port bar(1) 00:05:26.448 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:26.448 EAL: Ignore mapping IO port bar(1) 00:05:26.448 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:26.709 EAL: Ignore mapping IO port bar(1) 00:05:26.709 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:26.969 EAL: Ignore mapping IO port bar(1) 00:05:26.969 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:27.231 EAL: Ignore mapping IO port bar(1) 00:05:27.231 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:27.231 EAL: Ignore mapping IO port bar(1) 00:05:27.493 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:27.493 EAL: Ignore mapping IO port bar(1) 00:05:27.755 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:27.755 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:27.755 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:27.755 Starting DPDK initialization... 00:05:27.755 Starting SPDK post initialization... 00:05:27.755 SPDK NVMe probe 00:05:27.755 Attaching to 0000:65:00.0 00:05:27.755 Attached to 0000:65:00.0 00:05:27.755 Cleaning up... 00:05:29.770 00:05:29.770 real 0m5.752s 00:05:29.770 user 0m0.109s 00:05:29.770 sys 0m0.192s 00:05:29.770 09:37:44 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.770 09:37:44 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:29.770 ************************************ 00:05:29.770 END TEST env_dpdk_post_init 00:05:29.770 ************************************ 00:05:29.770 09:37:44 env -- env/env.sh@26 -- # uname 00:05:29.770 09:37:44 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:29.770 09:37:44 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:29.770 09:37:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.770 09:37:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.770 09:37:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.770 ************************************ 00:05:29.770 START TEST env_mem_callbacks 00:05:29.770 ************************************ 00:05:29.770 09:37:44 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:29.770 EAL: Detected CPU lcores: 128 00:05:29.770 EAL: Detected NUMA nodes: 2 00:05:29.770 EAL: Detected shared linkage of DPDK 00:05:29.770 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:29.770 EAL: Selected IOVA mode 'VA' 00:05:29.770 EAL: VFIO support initialized 00:05:29.770 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:29.770 00:05:29.770 00:05:29.770 CUnit - A unit testing framework for C - Version 2.1-3 00:05:29.770 http://cunit.sourceforge.net/ 00:05:29.770 00:05:29.770 00:05:29.770 Suite: memory 00:05:29.770 Test: test ... 
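The memory suite beginning above is the env_mem_callbacks unit test; each register/unregister line that follows corresponds to one spdk_mem_register()/spdk_mem_unregister() call on a malloc'd buffer (note the 2MB-multiple addresses and lengths, matching the 2MB granularity of SPDK's memory maps). A minimal sketch of the same pattern against SPDK's public env API; the buffer size and alignment are illustrative:

#include <stdlib.h>
#include "spdk/env.h"

/* Make an application-owned buffer visible to SPDK's address
 * translation (vtophys/IOMMU) maps, then unregister before freeing. */
static int
register_example_buf(void)
{
    size_t len = 2 * 1024 * 1024;
    void *buf = aligned_alloc(len, len);    /* 2MB-aligned, 2MB long */

    if (buf == NULL || spdk_mem_register(buf, len) != 0) {
        free(buf);
        return -1;
    }
    /* ... I/O referencing buf could run here ... */
    spdk_mem_unregister(buf, len);
    free(buf);
    return 0;
}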
00:05:29.770 register 0x200000200000 2097152 00:05:29.770 malloc 3145728 00:05:29.770 register 0x200000400000 4194304 00:05:29.770 buf 0x200000500000 len 3145728 PASSED 00:05:29.770 malloc 64 00:05:29.770 buf 0x2000004fff40 len 64 PASSED 00:05:29.770 malloc 4194304 00:05:29.770 register 0x200000800000 6291456 00:05:29.770 buf 0x200000a00000 len 4194304 PASSED 00:05:29.770 free 0x200000500000 3145728 00:05:29.770 free 0x2000004fff40 64 00:05:29.770 unregister 0x200000400000 4194304 PASSED 00:05:29.770 free 0x200000a00000 4194304 00:05:29.770 unregister 0x200000800000 6291456 PASSED 00:05:29.770 malloc 8388608 00:05:29.770 register 0x200000400000 10485760 00:05:29.770 buf 0x200000600000 len 8388608 PASSED 00:05:29.770 free 0x200000600000 8388608 00:05:29.770 unregister 0x200000400000 10485760 PASSED 00:05:29.770 passed 00:05:29.770 00:05:29.770 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.770 suites 1 1 n/a 0 0 00:05:29.770 tests 1 1 1 0 0 00:05:29.770 asserts 15 15 15 0 n/a 00:05:29.770 00:05:29.771 Elapsed time = 0.010 seconds 00:05:29.771 00:05:29.771 real 0m0.068s 00:05:29.771 user 0m0.017s 00:05:29.771 sys 0m0.052s 00:05:29.771 09:37:44 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.771 09:37:44 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:29.771 ************************************ 00:05:29.771 END TEST env_mem_callbacks 00:05:29.771 ************************************ 00:05:29.771 00:05:29.771 real 0m7.536s 00:05:29.771 user 0m1.037s 00:05:29.771 sys 0m1.058s 00:05:29.771 09:37:45 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.771 09:37:45 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.771 ************************************ 00:05:29.771 END TEST env 00:05:29.771 ************************************ 00:05:29.771 09:37:45 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:29.771 09:37:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.771 09:37:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.771 09:37:45 -- common/autotest_common.sh@10 -- # set +x 00:05:29.771 ************************************ 00:05:29.771 START TEST rpc 00:05:29.771 ************************************ 00:05:29.771 09:37:45 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:29.771 * Looking for test storage... 
00:05:29.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:29.771 09:37:45 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:29.771 09:37:45 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:29.771 09:37:45 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:30.033 09:37:45 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:30.033 09:37:45 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.033 09:37:45 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.033 09:37:45 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.033 09:37:45 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.033 09:37:45 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.033 09:37:45 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.033 09:37:45 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.033 09:37:45 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.033 09:37:45 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.033 09:37:45 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.033 09:37:45 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.033 09:37:45 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:30.033 09:37:45 rpc -- scripts/common.sh@345 -- # : 1 00:05:30.033 09:37:45 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.033 09:37:45 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:30.033 09:37:45 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:30.033 09:37:45 rpc -- scripts/common.sh@353 -- # local d=1 00:05:30.033 09:37:45 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.033 09:37:45 rpc -- scripts/common.sh@355 -- # echo 1 00:05:30.033 09:37:45 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.033 09:37:45 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:30.033 09:37:45 rpc -- scripts/common.sh@353 -- # local d=2 00:05:30.033 09:37:45 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.033 09:37:45 rpc -- scripts/common.sh@355 -- # echo 2 00:05:30.033 09:37:45 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.033 09:37:45 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.033 09:37:45 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.033 09:37:45 rpc -- scripts/common.sh@368 -- # return 0 00:05:30.033 09:37:45 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.033 09:37:45 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:30.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.033 --rc genhtml_branch_coverage=1 00:05:30.033 --rc genhtml_function_coverage=1 00:05:30.033 --rc genhtml_legend=1 00:05:30.033 --rc geninfo_all_blocks=1 00:05:30.033 --rc geninfo_unexecuted_blocks=1 00:05:30.033 00:05:30.033 ' 00:05:30.033 09:37:45 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:30.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.033 --rc genhtml_branch_coverage=1 00:05:30.033 --rc genhtml_function_coverage=1 00:05:30.033 --rc genhtml_legend=1 00:05:30.033 --rc geninfo_all_blocks=1 00:05:30.033 --rc geninfo_unexecuted_blocks=1 00:05:30.033 00:05:30.033 ' 00:05:30.033 09:37:45 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:30.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.033 --rc genhtml_branch_coverage=1 00:05:30.033 --rc genhtml_function_coverage=1 
00:05:30.033 --rc genhtml_legend=1 00:05:30.033 --rc geninfo_all_blocks=1 00:05:30.033 --rc geninfo_unexecuted_blocks=1 00:05:30.033 00:05:30.033 ' 00:05:30.033 09:37:45 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:30.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.033 --rc genhtml_branch_coverage=1 00:05:30.033 --rc genhtml_function_coverage=1 00:05:30.033 --rc genhtml_legend=1 00:05:30.033 --rc geninfo_all_blocks=1 00:05:30.033 --rc geninfo_unexecuted_blocks=1 00:05:30.033 00:05:30.033 ' 00:05:30.033 09:37:45 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3639021 00:05:30.033 09:37:45 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.033 09:37:45 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:30.033 09:37:45 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3639021 00:05:30.033 09:37:45 rpc -- common/autotest_common.sh@835 -- # '[' -z 3639021 ']' 00:05:30.033 09:37:45 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.033 09:37:45 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.033 09:37:45 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.033 09:37:45 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.033 09:37:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.033 [2024-11-27 09:37:45.350937] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:05:30.033 [2024-11-27 09:37:45.351003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3639021 ] 00:05:30.033 [2024-11-27 09:37:45.442468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.033 [2024-11-27 09:37:45.495051] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:30.033 [2024-11-27 09:37:45.495106] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3639021' to capture a snapshot of events at runtime. 00:05:30.033 [2024-11-27 09:37:45.495115] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:30.033 [2024-11-27 09:37:45.495122] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:30.033 [2024-11-27 09:37:45.495128] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3639021 for offline analysis/debug. 
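From here on, the rpc suite drives spdk_tgt's JSON-RPC server over /var/tmp/spdk.sock via the rpc_cmd helper; methods such as bdev_malloc_create and bdev_get_bdevs are registered inside SPDK with the SPDK_RPC_REGISTER macro. A minimal sketch of a custom method; "example_ping" is an invented name, not a real SPDK RPC:

#include "spdk/json.h"
#include "spdk/jsonrpc.h"
#include "spdk/rpc.h"

/* Runs when a client calls the "example_ping" method. */
static void
rpc_example_ping(struct spdk_jsonrpc_request *request,
                 const struct spdk_json_val *params)
{
    struct spdk_json_write_ctx *w;

    if (params != NULL) {    /* this method takes no parameters */
        spdk_jsonrpc_send_error_response(request,
            SPDK_JSONRPC_ERROR_INVALID_PARAMS, "no parameters expected");
        return;
    }

    w = spdk_jsonrpc_begin_result(request);
    spdk_json_write_string(w, "pong");
    spdk_jsonrpc_end_result(request, w);
}
/* Callable once the app has started (SPDK_RPC_RUNTIME state). */
SPDK_RPC_REGISTER("example_ping", rpc_example_ping, SPDK_RPC_RUNTIME)

Once linked into the target, the same rpc_cmd wrapper used throughout this log would presumably invoke it as: rpc_cmd example_ping.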
00:05:30.033 [2024-11-27 09:37:45.495901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.978 09:37:46 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.978 09:37:46 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:30.978 09:37:46 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:30.978 09:37:46 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:30.978 09:37:46 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:30.978 09:37:46 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:30.978 09:37:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.978 09:37:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.978 09:37:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.978 ************************************ 00:05:30.978 START TEST rpc_integrity 00:05:30.978 ************************************ 00:05:30.978 09:37:46 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:30.978 09:37:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:30.978 09:37:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.978 09:37:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.978 09:37:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.978 09:37:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:30.978 09:37:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:30.978 09:37:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:30.978 09:37:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:30.978 09:37:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.978 09:37:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.978 09:37:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.978 09:37:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:30.978 09:37:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:30.978 09:37:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.978 09:37:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.978 09:37:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.978 09:37:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:30.978 { 00:05:30.978 "name": "Malloc0", 00:05:30.978 "aliases": [ 00:05:30.978 "af434132-98e5-4da4-8e66-0a92d45f2bda" 00:05:30.978 ], 00:05:30.978 "product_name": "Malloc disk", 00:05:30.978 "block_size": 512, 00:05:30.978 "num_blocks": 16384, 00:05:30.978 "uuid": "af434132-98e5-4da4-8e66-0a92d45f2bda", 00:05:30.978 "assigned_rate_limits": { 00:05:30.978 "rw_ios_per_sec": 0, 00:05:30.978 "rw_mbytes_per_sec": 0, 00:05:30.978 "r_mbytes_per_sec": 0, 00:05:30.978 "w_mbytes_per_sec": 0 00:05:30.978 }, 
00:05:30.978 "claimed": false, 00:05:30.978 "zoned": false, 00:05:30.978 "supported_io_types": { 00:05:30.978 "read": true, 00:05:30.978 "write": true, 00:05:30.978 "unmap": true, 00:05:30.978 "flush": true, 00:05:30.978 "reset": true, 00:05:30.978 "nvme_admin": false, 00:05:30.978 "nvme_io": false, 00:05:30.978 "nvme_io_md": false, 00:05:30.978 "write_zeroes": true, 00:05:30.978 "zcopy": true, 00:05:30.978 "get_zone_info": false, 00:05:30.978 "zone_management": false, 00:05:30.978 "zone_append": false, 00:05:30.978 "compare": false, 00:05:30.978 "compare_and_write": false, 00:05:30.978 "abort": true, 00:05:30.978 "seek_hole": false, 00:05:30.978 "seek_data": false, 00:05:30.978 "copy": true, 00:05:30.978 "nvme_iov_md": false 00:05:30.978 }, 00:05:30.979 "memory_domains": [ 00:05:30.979 { 00:05:30.979 "dma_device_id": "system", 00:05:30.979 "dma_device_type": 1 00:05:30.979 }, 00:05:30.979 { 00:05:30.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.979 "dma_device_type": 2 00:05:30.979 } 00:05:30.979 ], 00:05:30.979 "driver_specific": {} 00:05:30.979 } 00:05:30.979 ]' 00:05:30.979 09:37:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:30.979 09:37:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:30.979 09:37:46 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:30.979 09:37:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.979 09:37:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.979 [2024-11-27 09:37:46.325397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:30.979 [2024-11-27 09:37:46.325441] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:30.979 [2024-11-27 09:37:46.325458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2484db0 00:05:30.979 [2024-11-27 09:37:46.325466] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:30.979 [2024-11-27 09:37:46.326985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:30.979 [2024-11-27 09:37:46.327020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:30.979 Passthru0 00:05:30.979 09:37:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.979 09:37:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:30.979 09:37:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.979 09:37:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.979 09:37:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.979 09:37:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:30.979 { 00:05:30.979 "name": "Malloc0", 00:05:30.979 "aliases": [ 00:05:30.979 "af434132-98e5-4da4-8e66-0a92d45f2bda" 00:05:30.979 ], 00:05:30.979 "product_name": "Malloc disk", 00:05:30.979 "block_size": 512, 00:05:30.979 "num_blocks": 16384, 00:05:30.979 "uuid": "af434132-98e5-4da4-8e66-0a92d45f2bda", 00:05:30.979 "assigned_rate_limits": { 00:05:30.979 "rw_ios_per_sec": 0, 00:05:30.979 "rw_mbytes_per_sec": 0, 00:05:30.979 "r_mbytes_per_sec": 0, 00:05:30.979 "w_mbytes_per_sec": 0 00:05:30.979 }, 00:05:30.979 "claimed": true, 00:05:30.979 "claim_type": "exclusive_write", 00:05:30.979 "zoned": false, 00:05:30.979 "supported_io_types": { 00:05:30.979 "read": true, 00:05:30.979 "write": true, 00:05:30.979 "unmap": true, 00:05:30.979 "flush": 
true, 00:05:30.979 "reset": true, 00:05:30.979 "nvme_admin": false, 00:05:30.979 "nvme_io": false, 00:05:30.979 "nvme_io_md": false, 00:05:30.979 "write_zeroes": true, 00:05:30.979 "zcopy": true, 00:05:30.979 "get_zone_info": false, 00:05:30.979 "zone_management": false, 00:05:30.979 "zone_append": false, 00:05:30.979 "compare": false, 00:05:30.979 "compare_and_write": false, 00:05:30.979 "abort": true, 00:05:30.979 "seek_hole": false, 00:05:30.979 "seek_data": false, 00:05:30.979 "copy": true, 00:05:30.979 "nvme_iov_md": false 00:05:30.979 }, 00:05:30.979 "memory_domains": [ 00:05:30.979 { 00:05:30.979 "dma_device_id": "system", 00:05:30.979 "dma_device_type": 1 00:05:30.979 }, 00:05:30.979 { 00:05:30.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.979 "dma_device_type": 2 00:05:30.979 } 00:05:30.979 ], 00:05:30.979 "driver_specific": {} 00:05:30.979 }, 00:05:30.979 { 00:05:30.979 "name": "Passthru0", 00:05:30.979 "aliases": [ 00:05:30.979 "b76dfccf-298d-54ac-89dd-26507189e0bc" 00:05:30.979 ], 00:05:30.979 "product_name": "passthru", 00:05:30.979 "block_size": 512, 00:05:30.979 "num_blocks": 16384, 00:05:30.979 "uuid": "b76dfccf-298d-54ac-89dd-26507189e0bc", 00:05:30.979 "assigned_rate_limits": { 00:05:30.979 "rw_ios_per_sec": 0, 00:05:30.979 "rw_mbytes_per_sec": 0, 00:05:30.979 "r_mbytes_per_sec": 0, 00:05:30.979 "w_mbytes_per_sec": 0 00:05:30.979 }, 00:05:30.979 "claimed": false, 00:05:30.979 "zoned": false, 00:05:30.979 "supported_io_types": { 00:05:30.979 "read": true, 00:05:30.979 "write": true, 00:05:30.979 "unmap": true, 00:05:30.979 "flush": true, 00:05:30.979 "reset": true, 00:05:30.979 "nvme_admin": false, 00:05:30.979 "nvme_io": false, 00:05:30.979 "nvme_io_md": false, 00:05:30.979 "write_zeroes": true, 00:05:30.979 "zcopy": true, 00:05:30.979 "get_zone_info": false, 00:05:30.979 "zone_management": false, 00:05:30.979 "zone_append": false, 00:05:30.979 "compare": false, 00:05:30.979 "compare_and_write": false, 00:05:30.979 "abort": true, 00:05:30.979 "seek_hole": false, 00:05:30.979 "seek_data": false, 00:05:30.979 "copy": true, 00:05:30.979 "nvme_iov_md": false 00:05:30.979 }, 00:05:30.979 "memory_domains": [ 00:05:30.979 { 00:05:30.979 "dma_device_id": "system", 00:05:30.979 "dma_device_type": 1 00:05:30.979 }, 00:05:30.979 { 00:05:30.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.979 "dma_device_type": 2 00:05:30.979 } 00:05:30.979 ], 00:05:30.979 "driver_specific": { 00:05:30.979 "passthru": { 00:05:30.979 "name": "Passthru0", 00:05:30.979 "base_bdev_name": "Malloc0" 00:05:30.979 } 00:05:30.979 } 00:05:30.979 } 00:05:30.979 ]' 00:05:30.979 09:37:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:30.979 09:37:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:30.979 09:37:46 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:30.979 09:37:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.979 09:37:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.979 09:37:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.979 09:37:46 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:30.979 09:37:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.979 09:37:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.979 09:37:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.979 09:37:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:05:30.979 09:37:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.979 09:37:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.979 09:37:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.979 09:37:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:30.979 09:37:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:31.241 09:37:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:31.241 00:05:31.241 real 0m0.299s 00:05:31.241 user 0m0.184s 00:05:31.241 sys 0m0.047s 00:05:31.241 09:37:46 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.241 09:37:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.241 ************************************ 00:05:31.241 END TEST rpc_integrity 00:05:31.241 ************************************ 00:05:31.241 09:37:46 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:31.241 09:37:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.241 09:37:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.241 09:37:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.241 ************************************ 00:05:31.241 START TEST rpc_plugins 00:05:31.241 ************************************ 00:05:31.241 09:37:46 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:31.241 09:37:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:31.241 09:37:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.241 09:37:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:31.241 09:37:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.241 09:37:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:31.241 09:37:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:31.241 09:37:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.241 09:37:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:31.241 09:37:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.241 09:37:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:31.241 { 00:05:31.241 "name": "Malloc1", 00:05:31.241 "aliases": [ 00:05:31.241 "fcb9c437-e8c9-4839-846b-09af50bce548" 00:05:31.241 ], 00:05:31.241 "product_name": "Malloc disk", 00:05:31.241 "block_size": 4096, 00:05:31.241 "num_blocks": 256, 00:05:31.241 "uuid": "fcb9c437-e8c9-4839-846b-09af50bce548", 00:05:31.241 "assigned_rate_limits": { 00:05:31.241 "rw_ios_per_sec": 0, 00:05:31.241 "rw_mbytes_per_sec": 0, 00:05:31.241 "r_mbytes_per_sec": 0, 00:05:31.241 "w_mbytes_per_sec": 0 00:05:31.241 }, 00:05:31.241 "claimed": false, 00:05:31.241 "zoned": false, 00:05:31.241 "supported_io_types": { 00:05:31.241 "read": true, 00:05:31.241 "write": true, 00:05:31.241 "unmap": true, 00:05:31.241 "flush": true, 00:05:31.241 "reset": true, 00:05:31.241 "nvme_admin": false, 00:05:31.241 "nvme_io": false, 00:05:31.241 "nvme_io_md": false, 00:05:31.241 "write_zeroes": true, 00:05:31.241 "zcopy": true, 00:05:31.241 "get_zone_info": false, 00:05:31.241 "zone_management": false, 00:05:31.241 "zone_append": false, 00:05:31.241 "compare": false, 00:05:31.241 "compare_and_write": false, 00:05:31.241 "abort": true, 00:05:31.241 "seek_hole": false, 00:05:31.241 "seek_data": false, 00:05:31.241 "copy": true, 00:05:31.241 "nvme_iov_md": false 
00:05:31.241 }, 00:05:31.241 "memory_domains": [ 00:05:31.241 { 00:05:31.241 "dma_device_id": "system", 00:05:31.241 "dma_device_type": 1 00:05:31.241 }, 00:05:31.241 { 00:05:31.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.241 "dma_device_type": 2 00:05:31.241 } 00:05:31.241 ], 00:05:31.241 "driver_specific": {} 00:05:31.241 } 00:05:31.241 ]' 00:05:31.241 09:37:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:31.241 09:37:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:31.241 09:37:46 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:31.241 09:37:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.241 09:37:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:31.241 09:37:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.241 09:37:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:31.241 09:37:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.241 09:37:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:31.241 09:37:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.241 09:37:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:31.241 09:37:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:31.502 09:37:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:31.502 00:05:31.502 real 0m0.152s 00:05:31.502 user 0m0.093s 00:05:31.502 sys 0m0.022s 00:05:31.502 09:37:46 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.502 09:37:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:31.502 ************************************ 00:05:31.502 END TEST rpc_plugins 00:05:31.502 ************************************ 00:05:31.502 09:37:46 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:31.502 09:37:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.502 09:37:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.502 09:37:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.502 ************************************ 00:05:31.502 START TEST rpc_trace_cmd_test 00:05:31.502 ************************************ 00:05:31.502 09:37:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:31.502 09:37:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:31.502 09:37:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:31.502 09:37:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.502 09:37:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:31.502 09:37:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.502 09:37:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:31.502 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3639021", 00:05:31.502 "tpoint_group_mask": "0x8", 00:05:31.502 "iscsi_conn": { 00:05:31.502 "mask": "0x2", 00:05:31.502 "tpoint_mask": "0x0" 00:05:31.502 }, 00:05:31.502 "scsi": { 00:05:31.502 "mask": "0x4", 00:05:31.502 "tpoint_mask": "0x0" 00:05:31.502 }, 00:05:31.502 "bdev": { 00:05:31.502 "mask": "0x8", 00:05:31.502 "tpoint_mask": "0xffffffffffffffff" 00:05:31.502 }, 00:05:31.502 "nvmf_rdma": { 00:05:31.502 "mask": "0x10", 00:05:31.502 "tpoint_mask": "0x0" 00:05:31.502 }, 00:05:31.502 "nvmf_tcp": { 00:05:31.502 "mask": "0x20", 00:05:31.502 
"tpoint_mask": "0x0" 00:05:31.502 }, 00:05:31.502 "ftl": { 00:05:31.502 "mask": "0x40", 00:05:31.502 "tpoint_mask": "0x0" 00:05:31.502 }, 00:05:31.502 "blobfs": { 00:05:31.502 "mask": "0x80", 00:05:31.502 "tpoint_mask": "0x0" 00:05:31.502 }, 00:05:31.502 "dsa": { 00:05:31.502 "mask": "0x200", 00:05:31.502 "tpoint_mask": "0x0" 00:05:31.502 }, 00:05:31.502 "thread": { 00:05:31.502 "mask": "0x400", 00:05:31.502 "tpoint_mask": "0x0" 00:05:31.502 }, 00:05:31.502 "nvme_pcie": { 00:05:31.502 "mask": "0x800", 00:05:31.502 "tpoint_mask": "0x0" 00:05:31.502 }, 00:05:31.502 "iaa": { 00:05:31.502 "mask": "0x1000", 00:05:31.502 "tpoint_mask": "0x0" 00:05:31.502 }, 00:05:31.502 "nvme_tcp": { 00:05:31.502 "mask": "0x2000", 00:05:31.502 "tpoint_mask": "0x0" 00:05:31.502 }, 00:05:31.502 "bdev_nvme": { 00:05:31.502 "mask": "0x4000", 00:05:31.502 "tpoint_mask": "0x0" 00:05:31.502 }, 00:05:31.502 "sock": { 00:05:31.502 "mask": "0x8000", 00:05:31.502 "tpoint_mask": "0x0" 00:05:31.502 }, 00:05:31.502 "blob": { 00:05:31.502 "mask": "0x10000", 00:05:31.502 "tpoint_mask": "0x0" 00:05:31.502 }, 00:05:31.502 "bdev_raid": { 00:05:31.502 "mask": "0x20000", 00:05:31.502 "tpoint_mask": "0x0" 00:05:31.502 }, 00:05:31.502 "scheduler": { 00:05:31.502 "mask": "0x40000", 00:05:31.502 "tpoint_mask": "0x0" 00:05:31.502 } 00:05:31.502 }' 00:05:31.502 09:37:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:31.502 09:37:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:31.502 09:37:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:31.502 09:37:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:31.502 09:37:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:31.502 09:37:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:31.502 09:37:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:31.764 09:37:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:31.764 09:37:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:31.764 09:37:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:31.764 00:05:31.764 real 0m0.255s 00:05:31.764 user 0m0.198s 00:05:31.764 sys 0m0.047s 00:05:31.764 09:37:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.764 09:37:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:31.764 ************************************ 00:05:31.764 END TEST rpc_trace_cmd_test 00:05:31.764 ************************************ 00:05:31.764 09:37:47 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:31.764 09:37:47 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:31.764 09:37:47 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:31.764 09:37:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.764 09:37:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.764 09:37:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.764 ************************************ 00:05:31.764 START TEST rpc_daemon_integrity 00:05:31.764 ************************************ 00:05:31.764 09:37:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:31.764 09:37:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:31.764 09:37:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.764 09:37:47 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.764 09:37:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.764 09:37:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:31.764 09:37:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:31.764 09:37:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:31.764 09:37:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:31.764 09:37:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.764 09:37:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.764 09:37:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.764 09:37:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:31.764 09:37:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:31.764 09:37:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.764 09:37:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.025 09:37:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.025 09:37:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:32.025 { 00:05:32.025 "name": "Malloc2", 00:05:32.025 "aliases": [ 00:05:32.025 "e4426a50-a0b1-456e-97cb-b86580feab73" 00:05:32.025 ], 00:05:32.025 "product_name": "Malloc disk", 00:05:32.026 "block_size": 512, 00:05:32.026 "num_blocks": 16384, 00:05:32.026 "uuid": "e4426a50-a0b1-456e-97cb-b86580feab73", 00:05:32.026 "assigned_rate_limits": { 00:05:32.026 "rw_ios_per_sec": 0, 00:05:32.026 "rw_mbytes_per_sec": 0, 00:05:32.026 "r_mbytes_per_sec": 0, 00:05:32.026 "w_mbytes_per_sec": 0 00:05:32.026 }, 00:05:32.026 "claimed": false, 00:05:32.026 "zoned": false, 00:05:32.026 "supported_io_types": { 00:05:32.026 "read": true, 00:05:32.026 "write": true, 00:05:32.026 "unmap": true, 00:05:32.026 "flush": true, 00:05:32.026 "reset": true, 00:05:32.026 "nvme_admin": false, 00:05:32.026 "nvme_io": false, 00:05:32.026 "nvme_io_md": false, 00:05:32.026 "write_zeroes": true, 00:05:32.026 "zcopy": true, 00:05:32.026 "get_zone_info": false, 00:05:32.026 "zone_management": false, 00:05:32.026 "zone_append": false, 00:05:32.026 "compare": false, 00:05:32.026 "compare_and_write": false, 00:05:32.026 "abort": true, 00:05:32.026 "seek_hole": false, 00:05:32.026 "seek_data": false, 00:05:32.026 "copy": true, 00:05:32.026 "nvme_iov_md": false 00:05:32.026 }, 00:05:32.026 "memory_domains": [ 00:05:32.026 { 00:05:32.026 "dma_device_id": "system", 00:05:32.026 "dma_device_type": 1 00:05:32.026 }, 00:05:32.026 { 00:05:32.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.026 "dma_device_type": 2 00:05:32.026 } 00:05:32.026 ], 00:05:32.026 "driver_specific": {} 00:05:32.026 } 00:05:32.026 ]' 00:05:32.026 09:37:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:32.026 09:37:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:32.026 09:37:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:32.026 09:37:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.026 09:37:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.026 [2024-11-27 09:37:47.288011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:32.026 
[2024-11-27 09:37:47.288054] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:32.026 [2024-11-27 09:37:47.288069] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25b58d0 00:05:32.026 [2024-11-27 09:37:47.288076] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:32.026 [2024-11-27 09:37:47.289534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:32.026 [2024-11-27 09:37:47.289568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:32.026 Passthru0 00:05:32.026 09:37:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.026 09:37:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:32.026 09:37:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.026 09:37:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.026 09:37:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.026 09:37:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:32.026 { 00:05:32.026 "name": "Malloc2", 00:05:32.026 "aliases": [ 00:05:32.026 "e4426a50-a0b1-456e-97cb-b86580feab73" 00:05:32.026 ], 00:05:32.026 "product_name": "Malloc disk", 00:05:32.026 "block_size": 512, 00:05:32.026 "num_blocks": 16384, 00:05:32.026 "uuid": "e4426a50-a0b1-456e-97cb-b86580feab73", 00:05:32.026 "assigned_rate_limits": { 00:05:32.026 "rw_ios_per_sec": 0, 00:05:32.026 "rw_mbytes_per_sec": 0, 00:05:32.026 "r_mbytes_per_sec": 0, 00:05:32.026 "w_mbytes_per_sec": 0 00:05:32.026 }, 00:05:32.026 "claimed": true, 00:05:32.026 "claim_type": "exclusive_write", 00:05:32.026 "zoned": false, 00:05:32.026 "supported_io_types": { 00:05:32.026 "read": true, 00:05:32.026 "write": true, 00:05:32.026 "unmap": true, 00:05:32.026 "flush": true, 00:05:32.026 "reset": true, 00:05:32.026 "nvme_admin": false, 00:05:32.026 "nvme_io": false, 00:05:32.026 "nvme_io_md": false, 00:05:32.026 "write_zeroes": true, 00:05:32.026 "zcopy": true, 00:05:32.026 "get_zone_info": false, 00:05:32.026 "zone_management": false, 00:05:32.026 "zone_append": false, 00:05:32.026 "compare": false, 00:05:32.026 "compare_and_write": false, 00:05:32.026 "abort": true, 00:05:32.026 "seek_hole": false, 00:05:32.026 "seek_data": false, 00:05:32.026 "copy": true, 00:05:32.026 "nvme_iov_md": false 00:05:32.026 }, 00:05:32.026 "memory_domains": [ 00:05:32.026 { 00:05:32.026 "dma_device_id": "system", 00:05:32.026 "dma_device_type": 1 00:05:32.026 }, 00:05:32.026 { 00:05:32.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.026 "dma_device_type": 2 00:05:32.026 } 00:05:32.026 ], 00:05:32.026 "driver_specific": {} 00:05:32.026 }, 00:05:32.026 { 00:05:32.026 "name": "Passthru0", 00:05:32.026 "aliases": [ 00:05:32.026 "04d5a9b7-42d6-5880-b3db-a0e48c0dca9d" 00:05:32.026 ], 00:05:32.026 "product_name": "passthru", 00:05:32.026 "block_size": 512, 00:05:32.026 "num_blocks": 16384, 00:05:32.026 "uuid": "04d5a9b7-42d6-5880-b3db-a0e48c0dca9d", 00:05:32.026 "assigned_rate_limits": { 00:05:32.026 "rw_ios_per_sec": 0, 00:05:32.026 "rw_mbytes_per_sec": 0, 00:05:32.026 "r_mbytes_per_sec": 0, 00:05:32.026 "w_mbytes_per_sec": 0 00:05:32.026 }, 00:05:32.026 "claimed": false, 00:05:32.026 "zoned": false, 00:05:32.026 "supported_io_types": { 00:05:32.026 "read": true, 00:05:32.026 "write": true, 00:05:32.026 "unmap": true, 00:05:32.026 "flush": true, 00:05:32.026 "reset": true, 
00:05:32.026 "nvme_admin": false, 00:05:32.026 "nvme_io": false, 00:05:32.026 "nvme_io_md": false, 00:05:32.026 "write_zeroes": true, 00:05:32.026 "zcopy": true, 00:05:32.026 "get_zone_info": false, 00:05:32.026 "zone_management": false, 00:05:32.026 "zone_append": false, 00:05:32.026 "compare": false, 00:05:32.026 "compare_and_write": false, 00:05:32.026 "abort": true, 00:05:32.026 "seek_hole": false, 00:05:32.026 "seek_data": false, 00:05:32.026 "copy": true, 00:05:32.026 "nvme_iov_md": false 00:05:32.026 }, 00:05:32.026 "memory_domains": [ 00:05:32.026 { 00:05:32.026 "dma_device_id": "system", 00:05:32.026 "dma_device_type": 1 00:05:32.026 }, 00:05:32.026 { 00:05:32.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.026 "dma_device_type": 2 00:05:32.026 } 00:05:32.026 ], 00:05:32.026 "driver_specific": { 00:05:32.026 "passthru": { 00:05:32.026 "name": "Passthru0", 00:05:32.026 "base_bdev_name": "Malloc2" 00:05:32.026 } 00:05:32.026 } 00:05:32.026 } 00:05:32.026 ]' 00:05:32.026 09:37:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:32.026 09:37:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:32.026 09:37:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:32.026 09:37:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.026 09:37:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.026 09:37:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.026 09:37:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:32.026 09:37:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.026 09:37:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.026 09:37:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.026 09:37:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:32.026 09:37:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.026 09:37:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.026 09:37:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.026 09:37:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:32.026 09:37:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:32.026 09:37:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:32.026 00:05:32.026 real 0m0.310s 00:05:32.026 user 0m0.184s 00:05:32.026 sys 0m0.054s 00:05:32.026 09:37:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.027 09:37:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.027 ************************************ 00:05:32.027 END TEST rpc_daemon_integrity 00:05:32.027 ************************************ 00:05:32.027 09:37:47 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:32.027 09:37:47 rpc -- rpc/rpc.sh@84 -- # killprocess 3639021 00:05:32.027 09:37:47 rpc -- common/autotest_common.sh@954 -- # '[' -z 3639021 ']' 00:05:32.027 09:37:47 rpc -- common/autotest_common.sh@958 -- # kill -0 3639021 00:05:32.027 09:37:47 rpc -- common/autotest_common.sh@959 -- # uname 00:05:32.287 09:37:47 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.287 09:37:47 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3639021 
00:05:32.287 09:37:47 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.287 09:37:47 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.287 09:37:47 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3639021' 00:05:32.287 killing process with pid 3639021 00:05:32.287 09:37:47 rpc -- common/autotest_common.sh@973 -- # kill 3639021 00:05:32.287 09:37:47 rpc -- common/autotest_common.sh@978 -- # wait 3639021 00:05:32.548 00:05:32.548 real 0m2.702s 00:05:32.548 user 0m3.438s 00:05:32.548 sys 0m0.837s 00:05:32.548 09:37:47 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.548 09:37:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.548 ************************************ 00:05:32.548 END TEST rpc 00:05:32.548 ************************************ 00:05:32.548 09:37:47 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:32.548 09:37:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.548 09:37:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.548 09:37:47 -- common/autotest_common.sh@10 -- # set +x 00:05:32.548 ************************************ 00:05:32.548 START TEST skip_rpc 00:05:32.548 ************************************ 00:05:32.548 09:37:47 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:32.548 * Looking for test storage... 00:05:32.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:32.548 09:37:47 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:32.548 09:37:47 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:32.548 09:37:47 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:32.809 09:37:48 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:32.809 09:37:48 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.809 09:37:48 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.809 09:37:48 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.809 09:37:48 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.809 09:37:48 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.809 09:37:48 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.809 09:37:48 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.809 09:37:48 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.809 09:37:48 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.809 09:37:48 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.809 09:37:48 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.809 09:37:48 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:32.809 09:37:48 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:32.809 09:37:48 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.809 09:37:48 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:32.809 09:37:48 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:32.809 09:37:48 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:32.809 09:37:48 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.809 09:37:48 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:32.809 09:37:48 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.809 09:37:48 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:32.809 09:37:48 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:32.809 09:37:48 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.809 09:37:48 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:32.809 09:37:48 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.809 09:37:48 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.809 09:37:48 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.809 09:37:48 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:32.809 09:37:48 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.809 09:37:48 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:32.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.809 --rc genhtml_branch_coverage=1 00:05:32.809 --rc genhtml_function_coverage=1 00:05:32.809 --rc genhtml_legend=1 00:05:32.809 --rc geninfo_all_blocks=1 00:05:32.809 --rc geninfo_unexecuted_blocks=1 00:05:32.809 00:05:32.809 ' 00:05:32.809 09:37:48 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:32.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.809 --rc genhtml_branch_coverage=1 00:05:32.809 --rc genhtml_function_coverage=1 00:05:32.809 --rc genhtml_legend=1 00:05:32.809 --rc geninfo_all_blocks=1 00:05:32.809 --rc geninfo_unexecuted_blocks=1 00:05:32.809 00:05:32.809 ' 00:05:32.809 09:37:48 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:32.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.809 --rc genhtml_branch_coverage=1 00:05:32.809 --rc genhtml_function_coverage=1 00:05:32.809 --rc genhtml_legend=1 00:05:32.809 --rc geninfo_all_blocks=1 00:05:32.809 --rc geninfo_unexecuted_blocks=1 00:05:32.809 00:05:32.809 ' 00:05:32.809 09:37:48 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:32.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.809 --rc genhtml_branch_coverage=1 00:05:32.809 --rc genhtml_function_coverage=1 00:05:32.809 --rc genhtml_legend=1 00:05:32.809 --rc geninfo_all_blocks=1 00:05:32.809 --rc geninfo_unexecuted_blocks=1 00:05:32.809 00:05:32.809 ' 00:05:32.809 09:37:48 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:32.809 09:37:48 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:32.809 09:37:48 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:32.809 09:37:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.809 09:37:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.809 09:37:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.809 ************************************ 00:05:32.809 START TEST skip_rpc 00:05:32.809 ************************************ 00:05:32.809 09:37:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:32.809 
09:37:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3639870 00:05:32.809 09:37:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.809 09:37:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:32.809 09:37:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:32.809 [2024-11-27 09:37:48.173346] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:05:32.809 [2024-11-27 09:37:48.173406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3639870 ] 00:05:32.809 [2024-11-27 09:37:48.264966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.069 [2024-11-27 09:37:48.317827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.358 09:37:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:38.358 09:37:53 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:38.358 09:37:53 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:38.358 09:37:53 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:38.358 09:37:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.358 09:37:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:38.358 09:37:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.358 09:37:53 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:38.358 09:37:53 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.358 09:37:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.358 09:37:53 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:38.358 09:37:53 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:38.358 09:37:53 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:38.358 09:37:53 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:38.358 09:37:53 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:38.358 09:37:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:38.358 09:37:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3639870 00:05:38.358 09:37:53 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3639870 ']' 00:05:38.358 09:37:53 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3639870 00:05:38.358 09:37:53 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:38.358 09:37:53 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.358 09:37:53 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3639870 00:05:38.358 09:37:53 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:38.358 09:37:53 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:38.358 09:37:53 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3639870' 00:05:38.358 killing process with pid 3639870 00:05:38.358 09:37:53 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3639870 00:05:38.358 09:37:53 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3639870 00:05:38.358 00:05:38.358 real 0m5.265s 00:05:38.358 user 0m5.021s 00:05:38.358 sys 0m0.292s 00:05:38.358 09:37:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.358 09:37:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.358 ************************************ 00:05:38.358 END TEST skip_rpc 00:05:38.358 ************************************ 00:05:38.358 09:37:53 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:38.358 09:37:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.358 09:37:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.358 09:37:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.358 ************************************ 00:05:38.358 START TEST skip_rpc_with_json 00:05:38.358 ************************************ 00:05:38.358 09:37:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:38.358 09:37:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:38.358 09:37:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3640910 00:05:38.358 09:37:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:38.358 09:37:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3640910 00:05:38.359 09:37:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:38.359 09:37:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 3640910 ']' 00:05:38.359 09:37:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.359 09:37:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.359 09:37:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.359 09:37:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.359 09:37:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:38.359 [2024-11-27 09:37:53.514122] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
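For orientation: the skip_rpc test that ends here starts spdk_tgt with --no-rpc-server and then asserts that an RPC call fails, since no /var/tmp/spdk.sock is ever created. Reduced to a minimal hand-run sketch (the failure handling is illustrative, not lifted from the suite; the binary path, flags, and 5-second settle match the trace above):

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    pid=$!
    sleep 5                                    # the suite also sleeps 5s before probing
    if ./scripts/rpc.py spdk_get_version; then # talks to the default /var/tmp/spdk.sock
        echo "FAIL: RPC unexpectedly succeeded with --no-rpc-server" >&2
        exit 1
    fi
    kill "$pid"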
00:05:38.359 [2024-11-27 09:37:53.514187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3640910 ] 00:05:38.359 [2024-11-27 09:37:53.597441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.359 [2024-11-27 09:37:53.629706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.931 09:37:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.931 09:37:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:38.931 09:37:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:38.931 09:37:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.931 09:37:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:38.931 [2024-11-27 09:37:54.297621] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:38.931 request: 00:05:38.931 { 00:05:38.931 "trtype": "tcp", 00:05:38.931 "method": "nvmf_get_transports", 00:05:38.931 "req_id": 1 00:05:38.931 } 00:05:38.931 Got JSON-RPC error response 00:05:38.931 response: 00:05:38.931 { 00:05:38.931 "code": -19, 00:05:38.931 "message": "No such device" 00:05:38.931 } 00:05:38.931 09:37:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:38.931 09:37:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:38.931 09:37:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.931 09:37:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:38.931 [2024-11-27 09:37:54.309722] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:38.931 09:37:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.931 09:37:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:38.931 09:37:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.931 09:37:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:39.192 09:37:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.192 09:37:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:39.192 { 00:05:39.192 "subsystems": [ 00:05:39.192 { 00:05:39.192 "subsystem": "fsdev", 00:05:39.192 "config": [ 00:05:39.192 { 00:05:39.192 "method": "fsdev_set_opts", 00:05:39.192 "params": { 00:05:39.192 "fsdev_io_pool_size": 65535, 00:05:39.192 "fsdev_io_cache_size": 256 00:05:39.192 } 00:05:39.192 } 00:05:39.192 ] 00:05:39.192 }, 00:05:39.192 { 00:05:39.192 "subsystem": "vfio_user_target", 00:05:39.192 "config": null 00:05:39.192 }, 00:05:39.192 { 00:05:39.192 "subsystem": "keyring", 00:05:39.192 "config": [] 00:05:39.192 }, 00:05:39.192 { 00:05:39.192 "subsystem": "iobuf", 00:05:39.192 "config": [ 00:05:39.192 { 00:05:39.192 "method": "iobuf_set_options", 00:05:39.192 "params": { 00:05:39.192 "small_pool_count": 8192, 00:05:39.192 "large_pool_count": 1024, 00:05:39.192 "small_bufsize": 8192, 00:05:39.192 "large_bufsize": 135168, 00:05:39.192 "enable_numa": false 00:05:39.192 } 00:05:39.192 } 
00:05:39.192 ] 00:05:39.192 }, 00:05:39.192 { 00:05:39.192 "subsystem": "sock", 00:05:39.192 "config": [ 00:05:39.192 { 00:05:39.192 "method": "sock_set_default_impl", 00:05:39.192 "params": { 00:05:39.192 "impl_name": "posix" 00:05:39.192 } 00:05:39.192 }, 00:05:39.192 { 00:05:39.192 "method": "sock_impl_set_options", 00:05:39.192 "params": { 00:05:39.192 "impl_name": "ssl", 00:05:39.192 "recv_buf_size": 4096, 00:05:39.192 "send_buf_size": 4096, 00:05:39.192 "enable_recv_pipe": true, 00:05:39.192 "enable_quickack": false, 00:05:39.192 "enable_placement_id": 0, 00:05:39.192 "enable_zerocopy_send_server": true, 00:05:39.192 "enable_zerocopy_send_client": false, 00:05:39.192 "zerocopy_threshold": 0, 00:05:39.192 "tls_version": 0, 00:05:39.192 "enable_ktls": false 00:05:39.192 } 00:05:39.192 }, 00:05:39.192 { 00:05:39.192 "method": "sock_impl_set_options", 00:05:39.192 "params": { 00:05:39.192 "impl_name": "posix", 00:05:39.192 "recv_buf_size": 2097152, 00:05:39.192 "send_buf_size": 2097152, 00:05:39.192 "enable_recv_pipe": true, 00:05:39.192 "enable_quickack": false, 00:05:39.192 "enable_placement_id": 0, 00:05:39.193 "enable_zerocopy_send_server": true, 00:05:39.193 "enable_zerocopy_send_client": false, 00:05:39.193 "zerocopy_threshold": 0, 00:05:39.193 "tls_version": 0, 00:05:39.193 "enable_ktls": false 00:05:39.193 } 00:05:39.193 } 00:05:39.193 ] 00:05:39.193 }, 00:05:39.193 { 00:05:39.193 "subsystem": "vmd", 00:05:39.193 "config": [] 00:05:39.193 }, 00:05:39.193 { 00:05:39.193 "subsystem": "accel", 00:05:39.193 "config": [ 00:05:39.193 { 00:05:39.193 "method": "accel_set_options", 00:05:39.193 "params": { 00:05:39.193 "small_cache_size": 128, 00:05:39.193 "large_cache_size": 16, 00:05:39.193 "task_count": 2048, 00:05:39.193 "sequence_count": 2048, 00:05:39.193 "buf_count": 2048 00:05:39.193 } 00:05:39.193 } 00:05:39.193 ] 00:05:39.193 }, 00:05:39.193 { 00:05:39.193 "subsystem": "bdev", 00:05:39.193 "config": [ 00:05:39.193 { 00:05:39.193 "method": "bdev_set_options", 00:05:39.193 "params": { 00:05:39.193 "bdev_io_pool_size": 65535, 00:05:39.193 "bdev_io_cache_size": 256, 00:05:39.193 "bdev_auto_examine": true, 00:05:39.193 "iobuf_small_cache_size": 128, 00:05:39.193 "iobuf_large_cache_size": 16 00:05:39.193 } 00:05:39.193 }, 00:05:39.193 { 00:05:39.193 "method": "bdev_raid_set_options", 00:05:39.193 "params": { 00:05:39.193 "process_window_size_kb": 1024, 00:05:39.193 "process_max_bandwidth_mb_sec": 0 00:05:39.193 } 00:05:39.193 }, 00:05:39.193 { 00:05:39.193 "method": "bdev_iscsi_set_options", 00:05:39.193 "params": { 00:05:39.193 "timeout_sec": 30 00:05:39.193 } 00:05:39.193 }, 00:05:39.193 { 00:05:39.193 "method": "bdev_nvme_set_options", 00:05:39.193 "params": { 00:05:39.193 "action_on_timeout": "none", 00:05:39.193 "timeout_us": 0, 00:05:39.193 "timeout_admin_us": 0, 00:05:39.193 "keep_alive_timeout_ms": 10000, 00:05:39.193 "arbitration_burst": 0, 00:05:39.193 "low_priority_weight": 0, 00:05:39.193 "medium_priority_weight": 0, 00:05:39.193 "high_priority_weight": 0, 00:05:39.193 "nvme_adminq_poll_period_us": 10000, 00:05:39.193 "nvme_ioq_poll_period_us": 0, 00:05:39.193 "io_queue_requests": 0, 00:05:39.193 "delay_cmd_submit": true, 00:05:39.193 "transport_retry_count": 4, 00:05:39.193 "bdev_retry_count": 3, 00:05:39.193 "transport_ack_timeout": 0, 00:05:39.193 "ctrlr_loss_timeout_sec": 0, 00:05:39.193 "reconnect_delay_sec": 0, 00:05:39.193 "fast_io_fail_timeout_sec": 0, 00:05:39.193 "disable_auto_failback": false, 00:05:39.193 "generate_uuids": false, 00:05:39.193 "transport_tos": 
0, 00:05:39.193 "nvme_error_stat": false, 00:05:39.193 "rdma_srq_size": 0, 00:05:39.193 "io_path_stat": false, 00:05:39.193 "allow_accel_sequence": false, 00:05:39.193 "rdma_max_cq_size": 0, 00:05:39.193 "rdma_cm_event_timeout_ms": 0, 00:05:39.193 "dhchap_digests": [ 00:05:39.193 "sha256", 00:05:39.193 "sha384", 00:05:39.193 "sha512" 00:05:39.193 ], 00:05:39.193 "dhchap_dhgroups": [ 00:05:39.193 "null", 00:05:39.193 "ffdhe2048", 00:05:39.193 "ffdhe3072", 00:05:39.193 "ffdhe4096", 00:05:39.193 "ffdhe6144", 00:05:39.193 "ffdhe8192" 00:05:39.193 ] 00:05:39.193 } 00:05:39.193 }, 00:05:39.193 { 00:05:39.193 "method": "bdev_nvme_set_hotplug", 00:05:39.193 "params": { 00:05:39.193 "period_us": 100000, 00:05:39.193 "enable": false 00:05:39.193 } 00:05:39.193 }, 00:05:39.193 { 00:05:39.193 "method": "bdev_wait_for_examine" 00:05:39.193 } 00:05:39.193 ] 00:05:39.193 }, 00:05:39.193 { 00:05:39.193 "subsystem": "scsi", 00:05:39.193 "config": null 00:05:39.193 }, 00:05:39.193 { 00:05:39.193 "subsystem": "scheduler", 00:05:39.193 "config": [ 00:05:39.193 { 00:05:39.193 "method": "framework_set_scheduler", 00:05:39.193 "params": { 00:05:39.193 "name": "static" 00:05:39.193 } 00:05:39.193 } 00:05:39.193 ] 00:05:39.193 }, 00:05:39.193 { 00:05:39.193 "subsystem": "vhost_scsi", 00:05:39.193 "config": [] 00:05:39.193 }, 00:05:39.193 { 00:05:39.193 "subsystem": "vhost_blk", 00:05:39.193 "config": [] 00:05:39.193 }, 00:05:39.193 { 00:05:39.193 "subsystem": "ublk", 00:05:39.193 "config": [] 00:05:39.193 }, 00:05:39.193 { 00:05:39.193 "subsystem": "nbd", 00:05:39.193 "config": [] 00:05:39.193 }, 00:05:39.193 { 00:05:39.193 "subsystem": "nvmf", 00:05:39.193 "config": [ 00:05:39.193 { 00:05:39.193 "method": "nvmf_set_config", 00:05:39.193 "params": { 00:05:39.193 "discovery_filter": "match_any", 00:05:39.193 "admin_cmd_passthru": { 00:05:39.193 "identify_ctrlr": false 00:05:39.193 }, 00:05:39.193 "dhchap_digests": [ 00:05:39.193 "sha256", 00:05:39.193 "sha384", 00:05:39.193 "sha512" 00:05:39.193 ], 00:05:39.193 "dhchap_dhgroups": [ 00:05:39.193 "null", 00:05:39.193 "ffdhe2048", 00:05:39.193 "ffdhe3072", 00:05:39.193 "ffdhe4096", 00:05:39.193 "ffdhe6144", 00:05:39.193 "ffdhe8192" 00:05:39.193 ] 00:05:39.193 } 00:05:39.193 }, 00:05:39.193 { 00:05:39.193 "method": "nvmf_set_max_subsystems", 00:05:39.193 "params": { 00:05:39.193 "max_subsystems": 1024 00:05:39.193 } 00:05:39.193 }, 00:05:39.193 { 00:05:39.193 "method": "nvmf_set_crdt", 00:05:39.193 "params": { 00:05:39.193 "crdt1": 0, 00:05:39.193 "crdt2": 0, 00:05:39.193 "crdt3": 0 00:05:39.193 } 00:05:39.193 }, 00:05:39.193 { 00:05:39.193 "method": "nvmf_create_transport", 00:05:39.193 "params": { 00:05:39.193 "trtype": "TCP", 00:05:39.193 "max_queue_depth": 128, 00:05:39.193 "max_io_qpairs_per_ctrlr": 127, 00:05:39.193 "in_capsule_data_size": 4096, 00:05:39.193 "max_io_size": 131072, 00:05:39.193 "io_unit_size": 131072, 00:05:39.193 "max_aq_depth": 128, 00:05:39.193 "num_shared_buffers": 511, 00:05:39.193 "buf_cache_size": 4294967295, 00:05:39.193 "dif_insert_or_strip": false, 00:05:39.193 "zcopy": false, 00:05:39.193 "c2h_success": true, 00:05:39.193 "sock_priority": 0, 00:05:39.193 "abort_timeout_sec": 1, 00:05:39.193 "ack_timeout": 0, 00:05:39.193 "data_wr_pool_size": 0 00:05:39.193 } 00:05:39.193 } 00:05:39.193 ] 00:05:39.193 }, 00:05:39.193 { 00:05:39.193 "subsystem": "iscsi", 00:05:39.193 "config": [ 00:05:39.193 { 00:05:39.193 "method": "iscsi_set_options", 00:05:39.193 "params": { 00:05:39.193 "node_base": "iqn.2016-06.io.spdk", 00:05:39.193 "max_sessions": 
128, 00:05:39.193 "max_connections_per_session": 2, 00:05:39.193 "max_queue_depth": 64, 00:05:39.193 "default_time2wait": 2, 00:05:39.193 "default_time2retain": 20, 00:05:39.193 "first_burst_length": 8192, 00:05:39.193 "immediate_data": true, 00:05:39.193 "allow_duplicated_isid": false, 00:05:39.193 "error_recovery_level": 0, 00:05:39.193 "nop_timeout": 60, 00:05:39.193 "nop_in_interval": 30, 00:05:39.193 "disable_chap": false, 00:05:39.193 "require_chap": false, 00:05:39.193 "mutual_chap": false, 00:05:39.193 "chap_group": 0, 00:05:39.193 "max_large_datain_per_connection": 64, 00:05:39.193 "max_r2t_per_connection": 4, 00:05:39.193 "pdu_pool_size": 36864, 00:05:39.193 "immediate_data_pool_size": 16384, 00:05:39.193 "data_out_pool_size": 2048 00:05:39.193 } 00:05:39.193 } 00:05:39.193 ] 00:05:39.193 } 00:05:39.193 ] 00:05:39.193 } 00:05:39.193 09:37:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:39.193 09:37:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3640910 00:05:39.193 09:37:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3640910 ']' 00:05:39.193 09:37:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3640910 00:05:39.193 09:37:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:39.193 09:37:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.193 09:37:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3640910 00:05:39.193 09:37:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.193 09:37:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.193 09:37:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3640910' 00:05:39.193 killing process with pid 3640910 00:05:39.193 09:37:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3640910 00:05:39.193 09:37:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3640910 00:05:39.454 09:37:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3641251 00:05:39.454 09:37:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:39.454 09:37:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:44.745 09:37:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3641251 00:05:44.745 09:37:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3641251 ']' 00:05:44.745 09:37:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3641251 00:05:44.745 09:37:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:44.746 09:37:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:44.746 09:37:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3641251 00:05:44.746 09:37:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:44.746 09:37:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:44.746 09:37:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 3641251' 00:05:44.746 killing process with pid 3641251 00:05:44.746 09:37:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3641251 00:05:44.746 09:37:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3641251 00:05:44.746 09:37:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:44.746 09:37:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:44.746 00:05:44.746 real 0m6.539s 00:05:44.746 user 0m6.472s 00:05:44.746 sys 0m0.536s 00:05:44.746 09:37:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.746 09:37:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:44.746 ************************************ 00:05:44.746 END TEST skip_rpc_with_json 00:05:44.746 ************************************ 00:05:44.746 09:38:00 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:44.746 09:38:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.746 09:38:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.746 09:38:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.746 ************************************ 00:05:44.746 START TEST skip_rpc_with_delay 00:05:44.746 ************************************ 00:05:44.746 09:38:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:44.746 09:38:00 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:44.746 09:38:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:44.746 09:38:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:44.746 09:38:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.746 09:38:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.746 09:38:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.746 09:38:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.746 09:38:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.746 09:38:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.746 09:38:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.746 09:38:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:44.746 09:38:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:44.746 
[2024-11-27 09:38:00.142777] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:44.746 09:38:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:44.746 09:38:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:44.746 09:38:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:44.746 09:38:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:44.746 00:05:44.746 real 0m0.088s 00:05:44.746 user 0m0.059s 00:05:44.746 sys 0m0.029s 00:05:44.746 09:38:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.746 09:38:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:44.746 ************************************ 00:05:44.746 END TEST skip_rpc_with_delay 00:05:44.746 ************************************ 00:05:44.746 09:38:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:44.746 09:38:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:44.746 09:38:00 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:44.746 09:38:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.746 09:38:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.746 09:38:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.006 ************************************ 00:05:45.006 START TEST exit_on_failed_rpc_init 00:05:45.006 ************************************ 00:05:45.006 09:38:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:45.006 09:38:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3642322 00:05:45.006 09:38:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3642322 00:05:45.006 09:38:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.006 09:38:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 3642322 ']' 00:05:45.006 09:38:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.006 09:38:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.006 09:38:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.006 09:38:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.006 09:38:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:45.006 [2024-11-27 09:38:00.300001] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
00:05:45.006 [2024-11-27 09:38:00.300059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3642322 ] 00:05:45.006 [2024-11-27 09:38:00.385662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.006 [2024-11-27 09:38:00.420139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.948 09:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.948 09:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:45.948 09:38:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:45.948 09:38:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:45.948 09:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:45.948 09:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:45.948 09:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:45.948 09:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:45.948 09:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:45.948 09:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:45.948 09:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:45.948 09:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:45.948 09:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:45.948 09:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:45.948 09:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:45.948 [2024-11-27 09:38:01.151384] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:05:45.948 [2024-11-27 09:38:01.151440] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3642558 ] 00:05:45.948 [2024-11-27 09:38:01.238771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.948 [2024-11-27 09:38:01.274557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.948 [2024-11-27 09:38:01.274616] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
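The two rpc.c errors in this stretch are the whole point of exit_on_failed_rpc_init: a second spdk_tgt is launched against the same default RPC socket as the first, so its RPC listener cannot bind and the app must stop with a non-zero exit. A hand-run sketch of the same scenario (the readiness poll stands in for the suite's waitforlisten helper):

    ./build/bin/spdk_tgt -m 0x1 &                    # first target claims /var/tmp/spdk.sock
    pid=$!
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
    if ./build/bin/spdk_tgt -m 0x2; then             # same socket: listen fails, app exits non-zero
        echo "FAIL: second target should not have initialized" >&2
        exit 1
    fi
    kill "$pid"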
00:05:45.948 [2024-11-27 09:38:01.274631] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:45.948 [2024-11-27 09:38:01.274641] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:45.948 09:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:45.948 09:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:45.948 09:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:45.948 09:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:45.948 09:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:45.948 09:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:45.948 09:38:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:45.948 09:38:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3642322 00:05:45.948 09:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 3642322 ']' 00:05:45.948 09:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 3642322 00:05:45.948 09:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:45.948 09:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.948 09:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3642322 00:05:45.948 09:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.948 09:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.948 09:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3642322' 00:05:45.948 killing process with pid 3642322 00:05:45.948 09:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 3642322 00:05:45.948 09:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 3642322 00:05:46.209 00:05:46.209 real 0m1.325s 00:05:46.209 user 0m1.559s 00:05:46.209 sys 0m0.377s 00:05:46.209 09:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.209 09:38:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:46.209 ************************************ 00:05:46.209 END TEST exit_on_failed_rpc_init 00:05:46.209 ************************************ 00:05:46.209 09:38:01 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:46.209 00:05:46.209 real 0m13.733s 00:05:46.209 user 0m13.337s 00:05:46.209 sys 0m1.556s 00:05:46.209 09:38:01 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.209 09:38:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.209 ************************************ 00:05:46.209 END TEST skip_rpc 00:05:46.209 ************************************ 00:05:46.209 09:38:01 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:46.209 09:38:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.209 09:38:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.209 09:38:01 -- 
common/autotest_common.sh@10 -- # set +x 00:05:46.470 ************************************ 00:05:46.470 START TEST rpc_client 00:05:46.470 ************************************ 00:05:46.470 09:38:01 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:46.470 * Looking for test storage... 00:05:46.470 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:46.470 09:38:01 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:46.470 09:38:01 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:46.470 09:38:01 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:46.470 09:38:01 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:46.470 09:38:01 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.470 09:38:01 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.470 09:38:01 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.470 09:38:01 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.470 09:38:01 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.470 09:38:01 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.470 09:38:01 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.470 09:38:01 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.470 09:38:01 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.470 09:38:01 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.470 09:38:01 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.470 09:38:01 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:46.470 09:38:01 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:46.470 09:38:01 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.470 09:38:01 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.470 09:38:01 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:46.470 09:38:01 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:46.470 09:38:01 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.470 09:38:01 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:46.470 09:38:01 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.470 09:38:01 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:46.470 09:38:01 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:46.470 09:38:01 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.470 09:38:01 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:46.470 09:38:01 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.470 09:38:01 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.470 09:38:01 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.470 09:38:01 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:46.470 09:38:01 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.470 09:38:01 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:46.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.470 --rc genhtml_branch_coverage=1 00:05:46.470 --rc genhtml_function_coverage=1 00:05:46.470 --rc genhtml_legend=1 00:05:46.470 --rc geninfo_all_blocks=1 00:05:46.470 --rc geninfo_unexecuted_blocks=1 00:05:46.470 00:05:46.470 ' 00:05:46.470 09:38:01 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:46.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.470 --rc genhtml_branch_coverage=1 00:05:46.470 --rc genhtml_function_coverage=1 00:05:46.470 --rc genhtml_legend=1 00:05:46.470 --rc geninfo_all_blocks=1 00:05:46.470 --rc geninfo_unexecuted_blocks=1 00:05:46.470 00:05:46.470 ' 00:05:46.470 09:38:01 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:46.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.470 --rc genhtml_branch_coverage=1 00:05:46.470 --rc genhtml_function_coverage=1 00:05:46.470 --rc genhtml_legend=1 00:05:46.470 --rc geninfo_all_blocks=1 00:05:46.470 --rc geninfo_unexecuted_blocks=1 00:05:46.470 00:05:46.470 ' 00:05:46.470 09:38:01 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:46.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.470 --rc genhtml_branch_coverage=1 00:05:46.470 --rc genhtml_function_coverage=1 00:05:46.470 --rc genhtml_legend=1 00:05:46.470 --rc geninfo_all_blocks=1 00:05:46.470 --rc geninfo_unexecuted_blocks=1 00:05:46.470 00:05:46.470 ' 00:05:46.470 09:38:01 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:46.470 OK 00:05:46.470 09:38:01 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:46.470 00:05:46.470 real 0m0.223s 00:05:46.470 user 0m0.139s 00:05:46.470 sys 0m0.096s 00:05:46.470 09:38:01 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.470 09:38:01 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:46.470 ************************************ 00:05:46.470 END TEST rpc_client 00:05:46.470 ************************************ 00:05:46.731 09:38:01 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
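The scripts/common.sh trace above (and again below for json_config) is the "lt 1.15 2" helper deciding whether the installed lcov predates 2.x, which selects the branch/function-coverage flags exported as LCOV_OPTS. Its core is a field-by-field numeric compare of dot-separated versions, roughly as follows (a simplified stand-in, not the verbatim cmp_versions, which as the trace shows also splits on '-' and ':'):

    lt() {                        # succeed (return 0) iff version $1 < version $2
        local IFS=. i
        local -a a=($1) b=($2)    # unquoted on purpose: split fields on '.'
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                  # equal versions are not less-than
    }
    lt 1.15 2 && echo "lcov is pre-2.x"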
00:05:46.731 09:38:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.731 09:38:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.731 09:38:01 -- common/autotest_common.sh@10 -- # set +x 00:05:46.731 ************************************ 00:05:46.731 START TEST json_config 00:05:46.731 ************************************ 00:05:46.731 09:38:01 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:46.731 09:38:02 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:46.731 09:38:02 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:46.731 09:38:02 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:46.731 09:38:02 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:46.731 09:38:02 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.731 09:38:02 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.731 09:38:02 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.731 09:38:02 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.731 09:38:02 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.732 09:38:02 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.732 09:38:02 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.732 09:38:02 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.732 09:38:02 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.732 09:38:02 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.732 09:38:02 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.732 09:38:02 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:46.732 09:38:02 json_config -- scripts/common.sh@345 -- # : 1 00:05:46.732 09:38:02 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.732 09:38:02 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.732 09:38:02 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:46.732 09:38:02 json_config -- scripts/common.sh@353 -- # local d=1 00:05:46.732 09:38:02 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.732 09:38:02 json_config -- scripts/common.sh@355 -- # echo 1 00:05:46.732 09:38:02 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.732 09:38:02 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:46.732 09:38:02 json_config -- scripts/common.sh@353 -- # local d=2 00:05:46.732 09:38:02 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.732 09:38:02 json_config -- scripts/common.sh@355 -- # echo 2 00:05:46.732 09:38:02 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.732 09:38:02 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.732 09:38:02 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.732 09:38:02 json_config -- scripts/common.sh@368 -- # return 0 00:05:46.732 09:38:02 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.732 09:38:02 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:46.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.732 --rc genhtml_branch_coverage=1 00:05:46.732 --rc genhtml_function_coverage=1 00:05:46.732 --rc genhtml_legend=1 00:05:46.732 --rc geninfo_all_blocks=1 00:05:46.732 --rc geninfo_unexecuted_blocks=1 00:05:46.732 00:05:46.732 ' 00:05:46.732 09:38:02 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:46.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.732 --rc genhtml_branch_coverage=1 00:05:46.732 --rc genhtml_function_coverage=1 00:05:46.732 --rc genhtml_legend=1 00:05:46.732 --rc geninfo_all_blocks=1 00:05:46.732 --rc geninfo_unexecuted_blocks=1 00:05:46.732 00:05:46.732 ' 00:05:46.732 09:38:02 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:46.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.732 --rc genhtml_branch_coverage=1 00:05:46.732 --rc genhtml_function_coverage=1 00:05:46.732 --rc genhtml_legend=1 00:05:46.732 --rc geninfo_all_blocks=1 00:05:46.732 --rc geninfo_unexecuted_blocks=1 00:05:46.732 00:05:46.732 ' 00:05:46.732 09:38:02 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:46.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.732 --rc genhtml_branch_coverage=1 00:05:46.732 --rc genhtml_function_coverage=1 00:05:46.732 --rc genhtml_legend=1 00:05:46.732 --rc geninfo_all_blocks=1 00:05:46.732 --rc geninfo_unexecuted_blocks=1 00:05:46.732 00:05:46.732 ' 00:05:46.732 09:38:02 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:46.732 09:38:02 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:46.732 09:38:02 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:46.732 09:38:02 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:46.732 09:38:02 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:46.732 09:38:02 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:46.732 09:38:02 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:46.732 09:38:02 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:46.732 09:38:02 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
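Most of what test/nvmf/common.sh sets up here is the host identity that later connect tests hand to nvme-cli. Stitched together, a connect built from these variables would look roughly like this (illustrative only: the host NQN/UUID are generated fresh each run by nvme gen-hostnqn, testnqn is the suite's default subsystem NQN, and the exact derivation of NVME_HOSTID is paraphrased):

    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:}            # trailing UUID of the host NQN
    nvme connect -t tcp -a 127.0.0.1 -s 4420 \
        -n nqn.2016-06.io.spdk:testnqn \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"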
00:05:46.732 09:38:02 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:46.732 09:38:02 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:46.732 09:38:02 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:46.732 09:38:02 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:46.732 09:38:02 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:46.732 09:38:02 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:46.732 09:38:02 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:46.732 09:38:02 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:46.732 09:38:02 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:46.732 09:38:02 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:46.732 09:38:02 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:46.732 09:38:02 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:46.732 09:38:02 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:46.732 09:38:02 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:46.732 09:38:02 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.732 09:38:02 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.732 09:38:02 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.732 09:38:02 json_config -- paths/export.sh@5 -- # export PATH 00:05:46.732 09:38:02 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.732 09:38:02 json_config -- nvmf/common.sh@51 -- # : 0 00:05:46.732 09:38:02 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:46.732 09:38:02 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
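The "line 33: [: : integer expression expected" complaint just above is bash evaluating '[' '' -eq 1 ']': a numeric test against an empty variable, which errors out and is then treated as false, so the run simply continues to the next branch. The failure mode and a defensive rewrite (VAR is a stand-in; the log does not show which variable was empty):

    VAR=""
    [ "$VAR" -eq 1 ]               # -> [: : integer expression expected (exit status 2)
    [ "${VAR:-0}" -eq 1 ] || echo "empty value read as 0, no error printed"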
00:05:46.732 09:38:02 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:46.732 09:38:02 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:46.732 09:38:02 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:46.732 09:38:02 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:46.732 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:46.732 09:38:02 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:46.732 09:38:02 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:46.732 09:38:02 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:46.732 09:38:02 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:46.732 09:38:02 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:46.732 09:38:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:46.733 09:38:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:46.733 09:38:02 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:46.733 09:38:02 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:46.733 09:38:02 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:46.733 09:38:02 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:46.733 09:38:02 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:46.733 09:38:02 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:46.733 09:38:02 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:46.733 09:38:02 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:46.733 09:38:02 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:46.733 09:38:02 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:46.733 09:38:02 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:46.733 09:38:02 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:46.733 INFO: JSON configuration test init 00:05:46.733 09:38:02 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:46.733 09:38:02 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:46.733 09:38:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:46.733 09:38:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.733 09:38:02 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:46.733 09:38:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:46.733 09:38:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.995 09:38:02 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:46.995 09:38:02 json_config -- 
json_config/common.sh@9 -- # local app=target 00:05:46.995 09:38:02 json_config -- json_config/common.sh@10 -- # shift 00:05:46.995 09:38:02 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:46.995 09:38:02 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:46.995 09:38:02 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:46.995 09:38:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.995 09:38:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.995 09:38:02 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3642789 00:05:46.995 09:38:02 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:46.995 Waiting for target to run... 00:05:46.995 09:38:02 json_config -- json_config/common.sh@25 -- # waitforlisten 3642789 /var/tmp/spdk_tgt.sock 00:05:46.995 09:38:02 json_config -- common/autotest_common.sh@835 -- # '[' -z 3642789 ']' 00:05:46.995 09:38:02 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:46.995 09:38:02 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:46.995 09:38:02 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.995 09:38:02 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:46.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:46.995 09:38:02 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.995 09:38:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.995 [2024-11-27 09:38:02.265483] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
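json_config parks its target with --wait-for-rpc on a private socket, so nothing beyond the startup RPCs runs until the test drives initialization itself. The same handshake by hand (core mask, memory size, and socket path mirror the launch parameters above; the poll stands in for waitforlisten):

    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    until ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init   # leave startup-RPC mode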
00:05:46.995 [2024-11-27 09:38:02.265557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3642789 ] 00:05:47.256 [2024-11-27 09:38:02.700916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.517 [2024-11-27 09:38:02.730313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.778 09:38:03 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.778 09:38:03 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:47.778 09:38:03 json_config -- json_config/common.sh@26 -- # echo '' 00:05:47.778 00:05:47.778 09:38:03 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:47.778 09:38:03 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:47.778 09:38:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:47.778 09:38:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.778 09:38:03 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:47.778 09:38:03 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:47.778 09:38:03 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:47.778 09:38:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.778 09:38:03 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:47.778 09:38:03 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:47.778 09:38:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:48.350 09:38:03 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:48.350 09:38:03 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:48.350 09:38:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:48.350 09:38:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.350 09:38:03 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:48.350 09:38:03 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:48.350 09:38:03 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:48.350 09:38:03 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:48.350 09:38:03 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:48.350 09:38:03 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:48.350 09:38:03 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:48.350 09:38:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:48.612 09:38:03 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:48.612 09:38:03 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:48.612 09:38:03 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:48.612 09:38:03 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:48.612 09:38:03 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:48.612 09:38:03 json_config -- json_config/json_config.sh@54 -- # sort 00:05:48.612 09:38:03 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:48.612 09:38:03 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:48.612 09:38:03 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:48.612 09:38:03 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:48.612 09:38:03 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:48.612 09:38:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.612 09:38:03 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:48.612 09:38:03 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:48.612 09:38:03 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:48.612 09:38:03 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:48.612 09:38:03 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:48.612 09:38:03 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:48.612 09:38:03 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:48.612 09:38:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:48.612 09:38:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.612 09:38:03 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:48.612 09:38:03 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:48.612 09:38:03 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:48.612 09:38:03 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:48.612 09:38:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:48.612 MallocForNvmf0 00:05:48.874 09:38:04 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:48.874 09:38:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:48.874 MallocForNvmf1 00:05:48.874 09:38:04 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:48.874 09:38:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:49.135 [2024-11-27 09:38:04.401337] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:49.135 09:38:04 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:49.135 09:38:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:49.135 09:38:04 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:49.135 09:38:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:49.395 09:38:04 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:49.395 09:38:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:49.656 09:38:04 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:49.656 09:38:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:49.656 [2024-11-27 09:38:05.039305] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:49.656 09:38:05 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:49.656 09:38:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:49.656 09:38:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.656 09:38:05 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:49.656 09:38:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:49.656 09:38:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.916 09:38:05 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:49.916 09:38:05 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:49.916 09:38:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:49.916 MallocBdevForConfigChangeCheck 00:05:49.916 09:38:05 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:49.916 09:38:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:49.916 09:38:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.916 09:38:05 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:49.916 09:38:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:50.487 09:38:05 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:50.487 INFO: shutting down applications... 
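The trace above builds the NVMe-oF/TCP target configuration one RPC at a time. Collected into a standalone form, the same sequence looks like the sketch below; the rpc.py path and the /var/tmp/spdk_tgt.sock socket are the ones used in this workspace, the two numeric arguments to bdev_malloc_create are size in MiB and block size in bytes, and the remaining flags are copied exactly from the trace.

    # Sketch only: the RPC calls replayed from the json_config trace above.
    RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MiB bdev, 512 B blocks
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MiB bdev, 1024 B blocks
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0          # TCP transport, flags as in the trace
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420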
00:05:50.487 09:38:05 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:50.487 09:38:05 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:50.487 09:38:05 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:50.487 09:38:05 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:50.749 Calling clear_iscsi_subsystem 00:05:50.749 Calling clear_nvmf_subsystem 00:05:50.749 Calling clear_nbd_subsystem 00:05:50.749 Calling clear_ublk_subsystem 00:05:50.749 Calling clear_vhost_blk_subsystem 00:05:50.749 Calling clear_vhost_scsi_subsystem 00:05:50.749 Calling clear_bdev_subsystem 00:05:50.749 09:38:06 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:50.749 09:38:06 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:50.749 09:38:06 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:50.749 09:38:06 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:50.749 09:38:06 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:50.749 09:38:06 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:51.010 09:38:06 json_config -- json_config/json_config.sh@352 -- # break 00:05:51.010 09:38:06 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:51.010 09:38:06 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:51.010 09:38:06 json_config -- json_config/common.sh@31 -- # local app=target 00:05:51.010 09:38:06 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:51.010 09:38:06 json_config -- json_config/common.sh@35 -- # [[ -n 3642789 ]] 00:05:51.010 09:38:06 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3642789 00:05:51.010 09:38:06 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:51.010 09:38:06 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:51.010 09:38:06 json_config -- json_config/common.sh@41 -- # kill -0 3642789 00:05:51.010 09:38:06 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:51.583 09:38:06 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:51.583 09:38:06 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:51.583 09:38:06 json_config -- json_config/common.sh@41 -- # kill -0 3642789 00:05:51.583 09:38:06 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:51.583 09:38:06 json_config -- json_config/common.sh@43 -- # break 00:05:51.583 09:38:06 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:51.583 09:38:06 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:51.583 SPDK target shutdown done 00:05:51.583 09:38:06 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:51.583 INFO: relaunching applications... 
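Each target shutdown in this log follows the same pattern from json_config/common.sh, visible in the trace just above: send SIGINT, then poll the PID with kill -0 for up to 30 half-second intervals before announcing "SPDK target shutdown done". An equivalent standalone loop, as a sketch ($pid stands for the target PID, 3642789 above):

    # Sketch of the shutdown wait traced above.
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break   # stop polling once the process is gone
        sleep 0.5
    done
    echo 'SPDK target shutdown done'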
00:05:51.583 09:38:06 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:51.583 09:38:06 json_config -- json_config/common.sh@9 -- # local app=target 00:05:51.583 09:38:06 json_config -- json_config/common.sh@10 -- # shift 00:05:51.583 09:38:06 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:51.583 09:38:06 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:51.583 09:38:06 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:51.583 09:38:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:51.583 09:38:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:51.583 09:38:06 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3643926 00:05:51.583 09:38:06 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:51.583 Waiting for target to run... 00:05:51.583 09:38:06 json_config -- json_config/common.sh@25 -- # waitforlisten 3643926 /var/tmp/spdk_tgt.sock 00:05:51.583 09:38:06 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:51.583 09:38:06 json_config -- common/autotest_common.sh@835 -- # '[' -z 3643926 ']' 00:05:51.583 09:38:06 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:51.583 09:38:06 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.583 09:38:06 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:51.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:51.583 09:38:06 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.583 09:38:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.583 [2024-11-27 09:38:06.990891] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:05:51.583 [2024-11-27 09:38:06.990966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3643926 ] 00:05:51.844 [2024-11-27 09:38:07.276081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.844 [2024-11-27 09:38:07.305043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.416 [2024-11-27 09:38:07.803843] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:52.416 [2024-11-27 09:38:07.836208] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:52.416 09:38:07 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.416 09:38:07 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:52.416 09:38:07 json_config -- json_config/common.sh@26 -- # echo '' 00:05:52.416 00:05:52.416 09:38:07 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:52.416 09:38:07 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:52.416 INFO: Checking if target configuration is the same... 
00:05:52.416 09:38:07 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:52.416 09:38:07 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:52.416 09:38:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:52.677 + '[' 2 -ne 2 ']' 00:05:52.677 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:52.677 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:52.677 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:52.677 +++ basename /dev/fd/62 00:05:52.677 ++ mktemp /tmp/62.XXX 00:05:52.677 + tmp_file_1=/tmp/62.qYj 00:05:52.677 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:52.677 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:52.677 + tmp_file_2=/tmp/spdk_tgt_config.json.KIg 00:05:52.677 + ret=0 00:05:52.677 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:52.937 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:52.937 + diff -u /tmp/62.qYj /tmp/spdk_tgt_config.json.KIg 00:05:52.937 + echo 'INFO: JSON config files are the same' 00:05:52.937 INFO: JSON config files are the same 00:05:52.937 + rm /tmp/62.qYj /tmp/spdk_tgt_config.json.KIg 00:05:52.937 + exit 0 00:05:52.937 09:38:08 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:52.937 09:38:08 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:52.937 INFO: changing configuration and checking if this can be detected... 00:05:52.937 09:38:08 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:52.937 09:38:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:53.198 09:38:08 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:53.198 09:38:08 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:53.198 09:38:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:53.198 + '[' 2 -ne 2 ']' 00:05:53.198 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:53.198 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
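json_diff.sh, traced above, never compares the raw files: it normalizes both inputs with config_filter.py -method sort into temp files and diffs those, exiting 0 when the running target's configuration matches the saved spdk_tgt_config.json. A condensed sketch of that check (the temp-file names here are illustrative; the script derives its own with mktemp, as shown in the trace):

    # Sketch of the normalize-and-diff check performed by json_diff.sh.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    a=$(mktemp) b=$(mktemp)
    "$rootdir/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config \
        | "$rootdir/test/json_config/config_filter.py" -method sort > "$a"
    "$rootdir/test/json_config/config_filter.py" -method sort \
        < "$rootdir/spdk_tgt_config.json" > "$b"
    diff -u "$a" "$b" && echo 'INFO: JSON config files are the same'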
00:05:53.198 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:53.198 +++ basename /dev/fd/62 00:05:53.198 ++ mktemp /tmp/62.XXX 00:05:53.198 + tmp_file_1=/tmp/62.J4Y 00:05:53.198 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:53.198 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:53.198 + tmp_file_2=/tmp/spdk_tgt_config.json.xDo 00:05:53.198 + ret=0 00:05:53.198 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:53.460 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:53.460 + diff -u /tmp/62.J4Y /tmp/spdk_tgt_config.json.xDo 00:05:53.460 + ret=1 00:05:53.460 + echo '=== Start of file: /tmp/62.J4Y ===' 00:05:53.460 + cat /tmp/62.J4Y 00:05:53.460 + echo '=== End of file: /tmp/62.J4Y ===' 00:05:53.460 + echo '' 00:05:53.460 + echo '=== Start of file: /tmp/spdk_tgt_config.json.xDo ===' 00:05:53.460 + cat /tmp/spdk_tgt_config.json.xDo 00:05:53.460 + echo '=== End of file: /tmp/spdk_tgt_config.json.xDo ===' 00:05:53.460 + echo '' 00:05:53.460 + rm /tmp/62.J4Y /tmp/spdk_tgt_config.json.xDo 00:05:53.460 + exit 1 00:05:53.460 09:38:08 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:53.460 INFO: configuration change detected. 00:05:53.460 09:38:08 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:53.460 09:38:08 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:53.460 09:38:08 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:53.460 09:38:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.460 09:38:08 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:53.460 09:38:08 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:53.460 09:38:08 json_config -- json_config/json_config.sh@324 -- # [[ -n 3643926 ]] 00:05:53.460 09:38:08 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:53.460 09:38:08 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:53.460 09:38:08 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:53.460 09:38:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.460 09:38:08 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:53.460 09:38:08 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:53.460 09:38:08 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:53.460 09:38:08 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:53.460 09:38:08 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:53.460 09:38:08 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:53.460 09:38:08 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:53.460 09:38:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.460 09:38:08 json_config -- json_config/json_config.sh@330 -- # killprocess 3643926 00:05:53.460 09:38:08 json_config -- common/autotest_common.sh@954 -- # '[' -z 3643926 ']' 00:05:53.460 09:38:08 json_config -- common/autotest_common.sh@958 -- # kill -0 3643926 00:05:53.460 09:38:08 json_config -- common/autotest_common.sh@959 -- # uname 00:05:53.460 09:38:08 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.460 09:38:08 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3643926 00:05:53.722 09:38:08 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.722 09:38:08 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.722 09:38:08 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3643926' 00:05:53.722 killing process with pid 3643926 00:05:53.722 09:38:08 json_config -- common/autotest_common.sh@973 -- # kill 3643926 00:05:53.722 09:38:08 json_config -- common/autotest_common.sh@978 -- # wait 3643926 00:05:53.982 09:38:09 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:53.983 09:38:09 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:53.983 09:38:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:53.983 09:38:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.983 09:38:09 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:53.983 09:38:09 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:53.983 INFO: Success 00:05:53.983 00:05:53.983 real 0m7.273s 00:05:53.983 user 0m8.676s 00:05:53.983 sys 0m2.034s 00:05:53.983 09:38:09 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.983 09:38:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.983 ************************************ 00:05:53.983 END TEST json_config 00:05:53.983 ************************************ 00:05:53.983 09:38:09 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:53.983 09:38:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.983 09:38:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.983 09:38:09 -- common/autotest_common.sh@10 -- # set +x 00:05:53.983 ************************************ 00:05:53.983 START TEST json_config_extra_key 00:05:53.983 ************************************ 00:05:53.983 09:38:09 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:53.983 09:38:09 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:53.983 09:38:09 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:53.983 09:38:09 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:54.245 09:38:09 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:54.245 09:38:09 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.245 09:38:09 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.245 09:38:09 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.245 09:38:09 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.245 09:38:09 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.245 09:38:09 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.245 09:38:09 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.245 09:38:09 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.245 09:38:09 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:05:54.245 09:38:09 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.245 09:38:09 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.245 09:38:09 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:54.245 09:38:09 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:54.245 09:38:09 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.245 09:38:09 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:54.245 09:38:09 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:54.245 09:38:09 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:54.245 09:38:09 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.245 09:38:09 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:54.245 09:38:09 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.245 09:38:09 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:54.245 09:38:09 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:54.245 09:38:09 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.245 09:38:09 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:54.245 09:38:09 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.245 09:38:09 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.245 09:38:09 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.245 09:38:09 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:54.245 09:38:09 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.245 09:38:09 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:54.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.245 --rc genhtml_branch_coverage=1 00:05:54.245 --rc genhtml_function_coverage=1 00:05:54.245 --rc genhtml_legend=1 00:05:54.245 --rc geninfo_all_blocks=1 00:05:54.245 --rc geninfo_unexecuted_blocks=1 00:05:54.245 00:05:54.245 ' 00:05:54.245 09:38:09 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:54.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.245 --rc genhtml_branch_coverage=1 00:05:54.245 --rc genhtml_function_coverage=1 00:05:54.245 --rc genhtml_legend=1 00:05:54.245 --rc geninfo_all_blocks=1 00:05:54.245 --rc geninfo_unexecuted_blocks=1 00:05:54.245 00:05:54.245 ' 00:05:54.245 09:38:09 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:54.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.245 --rc genhtml_branch_coverage=1 00:05:54.245 --rc genhtml_function_coverage=1 00:05:54.245 --rc genhtml_legend=1 00:05:54.245 --rc geninfo_all_blocks=1 00:05:54.245 --rc geninfo_unexecuted_blocks=1 00:05:54.245 00:05:54.245 ' 00:05:54.245 09:38:09 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:54.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.245 --rc genhtml_branch_coverage=1 00:05:54.245 --rc genhtml_function_coverage=1 00:05:54.245 --rc genhtml_legend=1 00:05:54.245 --rc geninfo_all_blocks=1 00:05:54.245 --rc geninfo_unexecuted_blocks=1 00:05:54.245 00:05:54.245 ' 00:05:54.245 09:38:09 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:54.245 09:38:09 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:54.245 09:38:09 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:54.245 09:38:09 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:54.245 09:38:09 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:54.245 09:38:09 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:54.245 09:38:09 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:54.245 09:38:09 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:54.245 09:38:09 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:54.245 09:38:09 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:54.245 09:38:09 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:54.245 09:38:09 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:54.245 09:38:09 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:54.245 09:38:09 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:54.245 09:38:09 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:54.245 09:38:09 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:54.245 09:38:09 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:54.245 09:38:09 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:54.245 09:38:09 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:54.245 09:38:09 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:54.245 09:38:09 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:54.245 09:38:09 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:54.245 09:38:09 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:54.245 09:38:09 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.245 09:38:09 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.245 09:38:09 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.245 09:38:09 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:54.245 09:38:09 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.245 09:38:09 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:54.245 09:38:09 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:54.245 09:38:09 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:54.245 09:38:09 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:54.245 09:38:09 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:54.245 09:38:09 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:54.245 09:38:09 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:54.245 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:54.245 09:38:09 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:54.245 09:38:09 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:54.245 09:38:09 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:54.245 09:38:09 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:54.245 09:38:09 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:54.245 09:38:09 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:54.245 09:38:09 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:54.245 09:38:09 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:54.245 09:38:09 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:54.245 09:38:09 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:54.245 09:38:09 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:54.245 09:38:09 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:54.245 09:38:09 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:54.245 09:38:09 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:54.245 INFO: launching applications... 
00:05:54.245 09:38:09 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:54.246 09:38:09 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:54.246 09:38:09 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:54.246 09:38:09 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:54.246 09:38:09 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:54.246 09:38:09 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:54.246 09:38:09 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:54.246 09:38:09 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:54.246 09:38:09 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3644618 00:05:54.246 09:38:09 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:54.246 Waiting for target to run... 00:05:54.246 09:38:09 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3644618 /var/tmp/spdk_tgt.sock 00:05:54.246 09:38:09 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 3644618 ']' 00:05:54.246 09:38:09 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:54.246 09:38:09 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:54.246 09:38:09 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.246 09:38:09 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:54.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:54.246 09:38:09 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.246 09:38:09 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:54.246 [2024-11-27 09:38:09.602483] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:05:54.246 [2024-11-27 09:38:09.602559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3644618 ] 00:05:54.507 [2024-11-27 09:38:09.932346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.507 [2024-11-27 09:38:09.961945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.080 09:38:10 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.080 09:38:10 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:55.080 09:38:10 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:55.080 00:05:55.080 09:38:10 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:55.080 INFO: shutting down applications... 
00:05:55.080 09:38:10 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:55.080 09:38:10 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:55.080 09:38:10 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:55.080 09:38:10 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3644618 ]] 00:05:55.080 09:38:10 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3644618 00:05:55.080 09:38:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:55.080 09:38:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:55.080 09:38:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3644618 00:05:55.080 09:38:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:55.653 09:38:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:55.653 09:38:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:55.653 09:38:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3644618 00:05:55.653 09:38:10 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:55.653 09:38:10 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:55.653 09:38:10 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:55.653 09:38:10 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:55.653 SPDK target shutdown done 00:05:55.653 09:38:10 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:55.653 Success 00:05:55.653 00:05:55.653 real 0m1.574s 00:05:55.653 user 0m1.129s 00:05:55.653 sys 0m0.471s 00:05:55.653 09:38:10 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.653 09:38:10 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:55.653 ************************************ 00:05:55.653 END TEST json_config_extra_key 00:05:55.653 ************************************ 00:05:55.653 09:38:10 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:55.653 09:38:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.653 09:38:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.653 09:38:10 -- common/autotest_common.sh@10 -- # set +x 00:05:55.653 ************************************ 00:05:55.653 START TEST alias_rpc 00:05:55.653 ************************************ 00:05:55.653 09:38:10 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:55.653 * Looking for test storage... 
00:05:55.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:55.653 09:38:11 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:55.653 09:38:11 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:55.653 09:38:11 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:55.915 09:38:11 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:55.915 09:38:11 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.915 09:38:11 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.915 09:38:11 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.915 09:38:11 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.915 09:38:11 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.915 09:38:11 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.915 09:38:11 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.915 09:38:11 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.915 09:38:11 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.915 09:38:11 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.915 09:38:11 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.915 09:38:11 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:55.915 09:38:11 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:55.915 09:38:11 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.915 09:38:11 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:55.915 09:38:11 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:55.915 09:38:11 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:55.915 09:38:11 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.915 09:38:11 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:55.915 09:38:11 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.915 09:38:11 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:55.915 09:38:11 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:55.915 09:38:11 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.915 09:38:11 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:55.915 09:38:11 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.915 09:38:11 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.915 09:38:11 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.915 09:38:11 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:55.915 09:38:11 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.915 09:38:11 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:55.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.915 --rc genhtml_branch_coverage=1 00:05:55.915 --rc genhtml_function_coverage=1 00:05:55.915 --rc genhtml_legend=1 00:05:55.915 --rc geninfo_all_blocks=1 00:05:55.915 --rc geninfo_unexecuted_blocks=1 00:05:55.915 00:05:55.915 ' 00:05:55.915 09:38:11 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:55.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.915 --rc genhtml_branch_coverage=1 00:05:55.915 --rc genhtml_function_coverage=1 00:05:55.915 --rc genhtml_legend=1 00:05:55.915 --rc geninfo_all_blocks=1 00:05:55.915 --rc geninfo_unexecuted_blocks=1 00:05:55.915 00:05:55.915 ' 00:05:55.915 09:38:11 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:55.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.915 --rc genhtml_branch_coverage=1 00:05:55.915 --rc genhtml_function_coverage=1 00:05:55.915 --rc genhtml_legend=1 00:05:55.915 --rc geninfo_all_blocks=1 00:05:55.915 --rc geninfo_unexecuted_blocks=1 00:05:55.915 00:05:55.915 ' 00:05:55.915 09:38:11 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:55.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.915 --rc genhtml_branch_coverage=1 00:05:55.915 --rc genhtml_function_coverage=1 00:05:55.915 --rc genhtml_legend=1 00:05:55.915 --rc geninfo_all_blocks=1 00:05:55.915 --rc geninfo_unexecuted_blocks=1 00:05:55.915 00:05:55.915 ' 00:05:55.915 09:38:11 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:55.915 09:38:11 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3644969 00:05:55.915 09:38:11 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3644969 00:05:55.915 09:38:11 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.915 09:38:11 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 3644969 ']' 00:05:55.915 09:38:11 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.915 09:38:11 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.915 09:38:11 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.915 09:38:11 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.915 09:38:11 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.916 [2024-11-27 09:38:11.252306] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
00:05:55.916 [2024-11-27 09:38:11.252381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3644969 ] 00:05:55.916 [2024-11-27 09:38:11.342238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.177 [2024-11-27 09:38:11.383773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.747 09:38:12 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.747 09:38:12 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:56.748 09:38:12 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:57.008 09:38:12 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3644969 00:05:57.008 09:38:12 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 3644969 ']' 00:05:57.008 09:38:12 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 3644969 00:05:57.008 09:38:12 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:57.008 09:38:12 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.009 09:38:12 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3644969 00:05:57.009 09:38:12 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.009 09:38:12 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.009 09:38:12 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3644969' 00:05:57.009 killing process with pid 3644969 00:05:57.009 09:38:12 alias_rpc -- common/autotest_common.sh@973 -- # kill 3644969 00:05:57.009 09:38:12 alias_rpc -- common/autotest_common.sh@978 -- # wait 3644969 00:05:57.270 00:05:57.270 real 0m1.515s 00:05:57.270 user 0m1.668s 00:05:57.270 sys 0m0.438s 00:05:57.270 09:38:12 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.270 09:38:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.270 ************************************ 00:05:57.270 END TEST alias_rpc 00:05:57.270 ************************************ 00:05:57.270 09:38:12 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:57.270 09:38:12 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:57.270 09:38:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.270 09:38:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.270 09:38:12 -- common/autotest_common.sh@10 -- # set +x 00:05:57.270 ************************************ 00:05:57.270 START TEST spdkcli_tcp 00:05:57.270 ************************************ 00:05:57.270 09:38:12 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:57.270 * Looking for test storage... 
00:05:57.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:57.270 09:38:12 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:57.270 09:38:12 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:57.270 09:38:12 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:57.531 09:38:12 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:57.531 09:38:12 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.531 09:38:12 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.531 09:38:12 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.531 09:38:12 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.531 09:38:12 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.531 09:38:12 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.531 09:38:12 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.531 09:38:12 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.531 09:38:12 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.531 09:38:12 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.531 09:38:12 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.531 09:38:12 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:57.531 09:38:12 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:57.531 09:38:12 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.531 09:38:12 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:57.531 09:38:12 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:57.531 09:38:12 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:57.531 09:38:12 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.531 09:38:12 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:57.531 09:38:12 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.531 09:38:12 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:57.531 09:38:12 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:57.531 09:38:12 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.531 09:38:12 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:57.531 09:38:12 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.531 09:38:12 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.531 09:38:12 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.531 09:38:12 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:57.531 09:38:12 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.531 09:38:12 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:57.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.531 --rc genhtml_branch_coverage=1 00:05:57.532 --rc genhtml_function_coverage=1 00:05:57.532 --rc genhtml_legend=1 00:05:57.532 --rc geninfo_all_blocks=1 00:05:57.532 --rc geninfo_unexecuted_blocks=1 00:05:57.532 00:05:57.532 ' 00:05:57.532 09:38:12 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:57.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.532 --rc genhtml_branch_coverage=1 00:05:57.532 --rc genhtml_function_coverage=1 00:05:57.532 --rc genhtml_legend=1 00:05:57.532 --rc geninfo_all_blocks=1 00:05:57.532 --rc 
geninfo_unexecuted_blocks=1 00:05:57.532 00:05:57.532 ' 00:05:57.532 09:38:12 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:57.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.532 --rc genhtml_branch_coverage=1 00:05:57.532 --rc genhtml_function_coverage=1 00:05:57.532 --rc genhtml_legend=1 00:05:57.532 --rc geninfo_all_blocks=1 00:05:57.532 --rc geninfo_unexecuted_blocks=1 00:05:57.532 00:05:57.532 ' 00:05:57.532 09:38:12 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:57.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.532 --rc genhtml_branch_coverage=1 00:05:57.532 --rc genhtml_function_coverage=1 00:05:57.532 --rc genhtml_legend=1 00:05:57.532 --rc geninfo_all_blocks=1 00:05:57.532 --rc geninfo_unexecuted_blocks=1 00:05:57.532 00:05:57.532 ' 00:05:57.532 09:38:12 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:57.532 09:38:12 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:57.532 09:38:12 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:57.532 09:38:12 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:57.532 09:38:12 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:57.532 09:38:12 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:57.532 09:38:12 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:57.532 09:38:12 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:57.532 09:38:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:57.532 09:38:12 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3645321 00:05:57.532 09:38:12 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3645321 00:05:57.532 09:38:12 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:57.532 09:38:12 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 3645321 ']' 00:05:57.532 09:38:12 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.532 09:38:12 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.532 09:38:12 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.532 09:38:12 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.532 09:38:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:57.532 [2024-11-27 09:38:12.863461] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
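To exercise the RPC server over TCP rather than the default UNIX-domain socket, the spdkcli_tcp run below bridges the two with socat and points rpc.py at 127.0.0.1:9998 with a retry count and timeout. The two essential commands, copied from the trace that follows, as a sketch:

    # Sketch of the TCP bridge used by spdkcli/tcp.sh just below.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &              # forward TCP 9998 to the RPC socket
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods     # -r retries, -t timeout, as in the trace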
00:05:57.532 [2024-11-27 09:38:12.863534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3645321 ] 00:05:57.532 [2024-11-27 09:38:12.950668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.532 [2024-11-27 09:38:12.986896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.532 [2024-11-27 09:38:12.986898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.474 09:38:13 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.474 09:38:13 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:58.474 09:38:13 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3645525 00:05:58.474 09:38:13 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:58.474 09:38:13 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:58.474 [ 00:05:58.474 "bdev_malloc_delete", 00:05:58.474 "bdev_malloc_create", 00:05:58.474 "bdev_null_resize", 00:05:58.474 "bdev_null_delete", 00:05:58.474 "bdev_null_create", 00:05:58.474 "bdev_nvme_cuse_unregister", 00:05:58.474 "bdev_nvme_cuse_register", 00:05:58.474 "bdev_opal_new_user", 00:05:58.474 "bdev_opal_set_lock_state", 00:05:58.474 "bdev_opal_delete", 00:05:58.474 "bdev_opal_get_info", 00:05:58.474 "bdev_opal_create", 00:05:58.474 "bdev_nvme_opal_revert", 00:05:58.474 "bdev_nvme_opal_init", 00:05:58.474 "bdev_nvme_send_cmd", 00:05:58.474 "bdev_nvme_set_keys", 00:05:58.474 "bdev_nvme_get_path_iostat", 00:05:58.474 "bdev_nvme_get_mdns_discovery_info", 00:05:58.474 "bdev_nvme_stop_mdns_discovery", 00:05:58.474 "bdev_nvme_start_mdns_discovery", 00:05:58.474 "bdev_nvme_set_multipath_policy", 00:05:58.474 "bdev_nvme_set_preferred_path", 00:05:58.474 "bdev_nvme_get_io_paths", 00:05:58.474 "bdev_nvme_remove_error_injection", 00:05:58.474 "bdev_nvme_add_error_injection", 00:05:58.474 "bdev_nvme_get_discovery_info", 00:05:58.474 "bdev_nvme_stop_discovery", 00:05:58.474 "bdev_nvme_start_discovery", 00:05:58.474 "bdev_nvme_get_controller_health_info", 00:05:58.474 "bdev_nvme_disable_controller", 00:05:58.474 "bdev_nvme_enable_controller", 00:05:58.474 "bdev_nvme_reset_controller", 00:05:58.474 "bdev_nvme_get_transport_statistics", 00:05:58.474 "bdev_nvme_apply_firmware", 00:05:58.474 "bdev_nvme_detach_controller", 00:05:58.474 "bdev_nvme_get_controllers", 00:05:58.474 "bdev_nvme_attach_controller", 00:05:58.474 "bdev_nvme_set_hotplug", 00:05:58.474 "bdev_nvme_set_options", 00:05:58.474 "bdev_passthru_delete", 00:05:58.474 "bdev_passthru_create", 00:05:58.474 "bdev_lvol_set_parent_bdev", 00:05:58.474 "bdev_lvol_set_parent", 00:05:58.474 "bdev_lvol_check_shallow_copy", 00:05:58.474 "bdev_lvol_start_shallow_copy", 00:05:58.474 "bdev_lvol_grow_lvstore", 00:05:58.474 "bdev_lvol_get_lvols", 00:05:58.474 "bdev_lvol_get_lvstores", 00:05:58.474 "bdev_lvol_delete", 00:05:58.474 "bdev_lvol_set_read_only", 00:05:58.474 "bdev_lvol_resize", 00:05:58.474 "bdev_lvol_decouple_parent", 00:05:58.474 "bdev_lvol_inflate", 00:05:58.474 "bdev_lvol_rename", 00:05:58.474 "bdev_lvol_clone_bdev", 00:05:58.474 "bdev_lvol_clone", 00:05:58.474 "bdev_lvol_snapshot", 00:05:58.474 "bdev_lvol_create", 00:05:58.474 "bdev_lvol_delete_lvstore", 00:05:58.474 "bdev_lvol_rename_lvstore", 
00:05:58.474 "bdev_lvol_create_lvstore", 00:05:58.474 "bdev_raid_set_options", 00:05:58.474 "bdev_raid_remove_base_bdev", 00:05:58.474 "bdev_raid_add_base_bdev", 00:05:58.474 "bdev_raid_delete", 00:05:58.474 "bdev_raid_create", 00:05:58.474 "bdev_raid_get_bdevs", 00:05:58.474 "bdev_error_inject_error", 00:05:58.474 "bdev_error_delete", 00:05:58.474 "bdev_error_create", 00:05:58.474 "bdev_split_delete", 00:05:58.474 "bdev_split_create", 00:05:58.474 "bdev_delay_delete", 00:05:58.474 "bdev_delay_create", 00:05:58.474 "bdev_delay_update_latency", 00:05:58.474 "bdev_zone_block_delete", 00:05:58.474 "bdev_zone_block_create", 00:05:58.474 "blobfs_create", 00:05:58.474 "blobfs_detect", 00:05:58.474 "blobfs_set_cache_size", 00:05:58.474 "bdev_aio_delete", 00:05:58.474 "bdev_aio_rescan", 00:05:58.474 "bdev_aio_create", 00:05:58.474 "bdev_ftl_set_property", 00:05:58.474 "bdev_ftl_get_properties", 00:05:58.474 "bdev_ftl_get_stats", 00:05:58.474 "bdev_ftl_unmap", 00:05:58.474 "bdev_ftl_unload", 00:05:58.474 "bdev_ftl_delete", 00:05:58.474 "bdev_ftl_load", 00:05:58.474 "bdev_ftl_create", 00:05:58.474 "bdev_virtio_attach_controller", 00:05:58.474 "bdev_virtio_scsi_get_devices", 00:05:58.474 "bdev_virtio_detach_controller", 00:05:58.474 "bdev_virtio_blk_set_hotplug", 00:05:58.474 "bdev_iscsi_delete", 00:05:58.474 "bdev_iscsi_create", 00:05:58.474 "bdev_iscsi_set_options", 00:05:58.474 "accel_error_inject_error", 00:05:58.474 "ioat_scan_accel_module", 00:05:58.475 "dsa_scan_accel_module", 00:05:58.475 "iaa_scan_accel_module", 00:05:58.475 "vfu_virtio_create_fs_endpoint", 00:05:58.475 "vfu_virtio_create_scsi_endpoint", 00:05:58.475 "vfu_virtio_scsi_remove_target", 00:05:58.475 "vfu_virtio_scsi_add_target", 00:05:58.475 "vfu_virtio_create_blk_endpoint", 00:05:58.475 "vfu_virtio_delete_endpoint", 00:05:58.475 "keyring_file_remove_key", 00:05:58.475 "keyring_file_add_key", 00:05:58.475 "keyring_linux_set_options", 00:05:58.475 "fsdev_aio_delete", 00:05:58.475 "fsdev_aio_create", 00:05:58.475 "iscsi_get_histogram", 00:05:58.475 "iscsi_enable_histogram", 00:05:58.475 "iscsi_set_options", 00:05:58.475 "iscsi_get_auth_groups", 00:05:58.475 "iscsi_auth_group_remove_secret", 00:05:58.475 "iscsi_auth_group_add_secret", 00:05:58.475 "iscsi_delete_auth_group", 00:05:58.475 "iscsi_create_auth_group", 00:05:58.475 "iscsi_set_discovery_auth", 00:05:58.475 "iscsi_get_options", 00:05:58.475 "iscsi_target_node_request_logout", 00:05:58.475 "iscsi_target_node_set_redirect", 00:05:58.475 "iscsi_target_node_set_auth", 00:05:58.475 "iscsi_target_node_add_lun", 00:05:58.475 "iscsi_get_stats", 00:05:58.475 "iscsi_get_connections", 00:05:58.475 "iscsi_portal_group_set_auth", 00:05:58.475 "iscsi_start_portal_group", 00:05:58.475 "iscsi_delete_portal_group", 00:05:58.475 "iscsi_create_portal_group", 00:05:58.475 "iscsi_get_portal_groups", 00:05:58.475 "iscsi_delete_target_node", 00:05:58.475 "iscsi_target_node_remove_pg_ig_maps", 00:05:58.475 "iscsi_target_node_add_pg_ig_maps", 00:05:58.475 "iscsi_create_target_node", 00:05:58.475 "iscsi_get_target_nodes", 00:05:58.475 "iscsi_delete_initiator_group", 00:05:58.475 "iscsi_initiator_group_remove_initiators", 00:05:58.475 "iscsi_initiator_group_add_initiators", 00:05:58.475 "iscsi_create_initiator_group", 00:05:58.475 "iscsi_get_initiator_groups", 00:05:58.475 "nvmf_set_crdt", 00:05:58.475 "nvmf_set_config", 00:05:58.475 "nvmf_set_max_subsystems", 00:05:58.475 "nvmf_stop_mdns_prr", 00:05:58.475 "nvmf_publish_mdns_prr", 00:05:58.475 "nvmf_subsystem_get_listeners", 00:05:58.475 
"nvmf_subsystem_get_qpairs", 00:05:58.475 "nvmf_subsystem_get_controllers", 00:05:58.475 "nvmf_get_stats", 00:05:58.475 "nvmf_get_transports", 00:05:58.475 "nvmf_create_transport", 00:05:58.475 "nvmf_get_targets", 00:05:58.475 "nvmf_delete_target", 00:05:58.475 "nvmf_create_target", 00:05:58.475 "nvmf_subsystem_allow_any_host", 00:05:58.475 "nvmf_subsystem_set_keys", 00:05:58.475 "nvmf_subsystem_remove_host", 00:05:58.475 "nvmf_subsystem_add_host", 00:05:58.475 "nvmf_ns_remove_host", 00:05:58.475 "nvmf_ns_add_host", 00:05:58.475 "nvmf_subsystem_remove_ns", 00:05:58.475 "nvmf_subsystem_set_ns_ana_group", 00:05:58.475 "nvmf_subsystem_add_ns", 00:05:58.475 "nvmf_subsystem_listener_set_ana_state", 00:05:58.475 "nvmf_discovery_get_referrals", 00:05:58.475 "nvmf_discovery_remove_referral", 00:05:58.475 "nvmf_discovery_add_referral", 00:05:58.475 "nvmf_subsystem_remove_listener", 00:05:58.475 "nvmf_subsystem_add_listener", 00:05:58.475 "nvmf_delete_subsystem", 00:05:58.475 "nvmf_create_subsystem", 00:05:58.475 "nvmf_get_subsystems", 00:05:58.475 "env_dpdk_get_mem_stats", 00:05:58.475 "nbd_get_disks", 00:05:58.475 "nbd_stop_disk", 00:05:58.475 "nbd_start_disk", 00:05:58.475 "ublk_recover_disk", 00:05:58.475 "ublk_get_disks", 00:05:58.475 "ublk_stop_disk", 00:05:58.475 "ublk_start_disk", 00:05:58.475 "ublk_destroy_target", 00:05:58.475 "ublk_create_target", 00:05:58.475 "virtio_blk_create_transport", 00:05:58.475 "virtio_blk_get_transports", 00:05:58.475 "vhost_controller_set_coalescing", 00:05:58.475 "vhost_get_controllers", 00:05:58.475 "vhost_delete_controller", 00:05:58.475 "vhost_create_blk_controller", 00:05:58.475 "vhost_scsi_controller_remove_target", 00:05:58.475 "vhost_scsi_controller_add_target", 00:05:58.475 "vhost_start_scsi_controller", 00:05:58.475 "vhost_create_scsi_controller", 00:05:58.475 "thread_set_cpumask", 00:05:58.475 "scheduler_set_options", 00:05:58.475 "framework_get_governor", 00:05:58.475 "framework_get_scheduler", 00:05:58.475 "framework_set_scheduler", 00:05:58.475 "framework_get_reactors", 00:05:58.475 "thread_get_io_channels", 00:05:58.475 "thread_get_pollers", 00:05:58.475 "thread_get_stats", 00:05:58.475 "framework_monitor_context_switch", 00:05:58.475 "spdk_kill_instance", 00:05:58.475 "log_enable_timestamps", 00:05:58.475 "log_get_flags", 00:05:58.475 "log_clear_flag", 00:05:58.475 "log_set_flag", 00:05:58.475 "log_get_level", 00:05:58.475 "log_set_level", 00:05:58.475 "log_get_print_level", 00:05:58.475 "log_set_print_level", 00:05:58.475 "framework_enable_cpumask_locks", 00:05:58.475 "framework_disable_cpumask_locks", 00:05:58.475 "framework_wait_init", 00:05:58.475 "framework_start_init", 00:05:58.475 "scsi_get_devices", 00:05:58.475 "bdev_get_histogram", 00:05:58.475 "bdev_enable_histogram", 00:05:58.475 "bdev_set_qos_limit", 00:05:58.475 "bdev_set_qd_sampling_period", 00:05:58.475 "bdev_get_bdevs", 00:05:58.475 "bdev_reset_iostat", 00:05:58.475 "bdev_get_iostat", 00:05:58.475 "bdev_examine", 00:05:58.475 "bdev_wait_for_examine", 00:05:58.475 "bdev_set_options", 00:05:58.475 "accel_get_stats", 00:05:58.475 "accel_set_options", 00:05:58.475 "accel_set_driver", 00:05:58.475 "accel_crypto_key_destroy", 00:05:58.475 "accel_crypto_keys_get", 00:05:58.475 "accel_crypto_key_create", 00:05:58.475 "accel_assign_opc", 00:05:58.475 "accel_get_module_info", 00:05:58.475 "accel_get_opc_assignments", 00:05:58.475 "vmd_rescan", 00:05:58.475 "vmd_remove_device", 00:05:58.475 "vmd_enable", 00:05:58.475 "sock_get_default_impl", 00:05:58.475 "sock_set_default_impl", 
00:05:58.475 "sock_impl_set_options", 00:05:58.475 "sock_impl_get_options", 00:05:58.475 "iobuf_get_stats", 00:05:58.475 "iobuf_set_options", 00:05:58.475 "keyring_get_keys", 00:05:58.475 "vfu_tgt_set_base_path", 00:05:58.475 "framework_get_pci_devices", 00:05:58.475 "framework_get_config", 00:05:58.475 "framework_get_subsystems", 00:05:58.475 "fsdev_set_opts", 00:05:58.475 "fsdev_get_opts", 00:05:58.475 "trace_get_info", 00:05:58.475 "trace_get_tpoint_group_mask", 00:05:58.475 "trace_disable_tpoint_group", 00:05:58.475 "trace_enable_tpoint_group", 00:05:58.475 "trace_clear_tpoint_mask", 00:05:58.475 "trace_set_tpoint_mask", 00:05:58.475 "notify_get_notifications", 00:05:58.475 "notify_get_types", 00:05:58.475 "spdk_get_version", 00:05:58.475 "rpc_get_methods" 00:05:58.475 ] 00:05:58.475 09:38:13 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:58.475 09:38:13 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:58.475 09:38:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:58.475 09:38:13 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:58.475 09:38:13 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3645321 00:05:58.475 09:38:13 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 3645321 ']' 00:05:58.475 09:38:13 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 3645321 00:05:58.475 09:38:13 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:58.475 09:38:13 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.475 09:38:13 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3645321 00:05:58.475 09:38:13 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.475 09:38:13 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.475 09:38:13 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3645321' 00:05:58.475 killing process with pid 3645321 00:05:58.475 09:38:13 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 3645321 00:05:58.475 09:38:13 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 3645321 00:05:58.737 00:05:58.737 real 0m1.507s 00:05:58.737 user 0m2.719s 00:05:58.737 sys 0m0.448s 00:05:58.737 09:38:14 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.737 09:38:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:58.737 ************************************ 00:05:58.737 END TEST spdkcli_tcp 00:05:58.737 ************************************ 00:05:58.737 09:38:14 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:58.737 09:38:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.737 09:38:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.737 09:38:14 -- common/autotest_common.sh@10 -- # set +x 00:05:58.737 ************************************ 00:05:58.737 START TEST dpdk_mem_utility 00:05:58.737 ************************************ 00:05:58.737 09:38:14 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:59.000 * Looking for test storage... 
00:05:59.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:59.000 09:38:14 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:59.000 09:38:14 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:59.000 09:38:14 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:59.000 09:38:14 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:59.000 09:38:14 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.000 09:38:14 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.000 09:38:14 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.000 09:38:14 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.000 09:38:14 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.000 09:38:14 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.000 09:38:14 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.000 09:38:14 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.000 09:38:14 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.000 09:38:14 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.000 09:38:14 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.000 09:38:14 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:59.000 09:38:14 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:59.000 09:38:14 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.000 09:38:14 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:59.000 09:38:14 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:59.001 09:38:14 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:59.001 09:38:14 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.001 09:38:14 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:59.001 09:38:14 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.001 09:38:14 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:59.001 09:38:14 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:59.001 09:38:14 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.001 09:38:14 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:59.001 09:38:14 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.001 09:38:14 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.001 09:38:14 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.001 09:38:14 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:59.001 09:38:14 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.001 09:38:14 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:59.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.001 --rc genhtml_branch_coverage=1 00:05:59.001 --rc genhtml_function_coverage=1 00:05:59.001 --rc genhtml_legend=1 00:05:59.001 --rc geninfo_all_blocks=1 00:05:59.001 --rc geninfo_unexecuted_blocks=1 00:05:59.001 00:05:59.001 ' 00:05:59.001 09:38:14 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:59.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.001 --rc 
genhtml_branch_coverage=1 00:05:59.001 --rc genhtml_function_coverage=1 00:05:59.001 --rc genhtml_legend=1 00:05:59.001 --rc geninfo_all_blocks=1 00:05:59.001 --rc geninfo_unexecuted_blocks=1 00:05:59.001 00:05:59.001 ' 00:05:59.001 09:38:14 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:59.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.001 --rc genhtml_branch_coverage=1 00:05:59.001 --rc genhtml_function_coverage=1 00:05:59.001 --rc genhtml_legend=1 00:05:59.001 --rc geninfo_all_blocks=1 00:05:59.001 --rc geninfo_unexecuted_blocks=1 00:05:59.001 00:05:59.001 ' 00:05:59.001 09:38:14 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:59.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.001 --rc genhtml_branch_coverage=1 00:05:59.001 --rc genhtml_function_coverage=1 00:05:59.001 --rc genhtml_legend=1 00:05:59.001 --rc geninfo_all_blocks=1 00:05:59.001 --rc geninfo_unexecuted_blocks=1 00:05:59.001 00:05:59.001 ' 00:05:59.001 09:38:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:59.001 09:38:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3645660 00:05:59.001 09:38:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3645660 00:05:59.001 09:38:14 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 3645660 ']' 00:05:59.001 09:38:14 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.001 09:38:14 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.001 09:38:14 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.001 09:38:14 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.001 09:38:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:59.001 09:38:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:59.001 [2024-11-27 09:38:14.418447] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
00:05:59.001 [2024-11-27 09:38:14.418520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3645660 ] 00:05:59.263 [2024-11-27 09:38:14.503713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.263 [2024-11-27 09:38:14.539370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.835 09:38:15 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.835 09:38:15 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:59.835 09:38:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:59.835 09:38:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:59.835 09:38:15 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.835 09:38:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:59.835 { 00:05:59.835 "filename": "/tmp/spdk_mem_dump.txt" 00:05:59.835 } 00:05:59.835 09:38:15 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.835 09:38:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:59.835 DPDK memory size 810.000000 MiB in 1 heap(s) 00:05:59.835 1 heaps totaling size 810.000000 MiB 00:05:59.835 size: 810.000000 MiB heap id: 0 00:05:59.835 end heaps---------- 00:05:59.835 9 mempools totaling size 595.772034 MiB 00:05:59.835 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:59.835 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:59.835 size: 92.545471 MiB name: bdev_io_3645660 00:05:59.835 size: 50.003479 MiB name: msgpool_3645660 00:05:59.835 size: 36.509338 MiB name: fsdev_io_3645660 00:05:59.835 size: 21.763794 MiB name: PDU_Pool 00:05:59.835 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:59.835 size: 4.133484 MiB name: evtpool_3645660 00:05:59.835 size: 0.026123 MiB name: Session_Pool 00:05:59.835 end mempools------- 00:05:59.835 6 memzones totaling size 4.142822 MiB 00:05:59.835 size: 1.000366 MiB name: RG_ring_0_3645660 00:05:59.835 size: 1.000366 MiB name: RG_ring_1_3645660 00:05:59.835 size: 1.000366 MiB name: RG_ring_4_3645660 00:05:59.835 size: 1.000366 MiB name: RG_ring_5_3645660 00:05:59.835 size: 0.125366 MiB name: RG_ring_2_3645660 00:05:59.835 size: 0.015991 MiB name: RG_ring_3_3645660 00:05:59.835 end memzones------- 00:05:59.835 09:38:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:00.096 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:00.096 list of free elements. 
size: 10.862488 MiB 00:06:00.096 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:00.096 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:00.096 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:00.096 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:00.096 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:00.096 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:00.096 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:00.096 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:00.096 element at address: 0x20001a600000 with size: 0.582886 MiB 00:06:00.096 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:00.096 element at address: 0x20000a600000 with size: 0.490723 MiB 00:06:00.096 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:00.096 element at address: 0x200003e00000 with size: 0.481934 MiB 00:06:00.096 element at address: 0x200027a00000 with size: 0.410034 MiB 00:06:00.096 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:00.096 list of standard malloc elements. size: 199.218628 MiB 00:06:00.096 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:00.096 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:00.096 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:00.096 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:00.096 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:00.096 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:00.096 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:00.096 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:00.096 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:00.096 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:00.096 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:00.096 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:00.096 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:00.096 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:00.096 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:00.096 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:00.096 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:00.096 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:00.096 element at address: 0x20000085f300 with size: 0.000183 MiB 00:06:00.096 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:00.096 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:00.096 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:00.096 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:00.096 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:00.096 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:00.096 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:00.096 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:00.096 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:00.096 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:00.096 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:00.096 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:00.096 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:00.096 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:06:00.096 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:00.096 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:00.096 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:00.096 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:00.096 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:00.096 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:00.096 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:06:00.096 element at address: 0x200027a69040 with size: 0.000183 MiB 00:06:00.096 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:06:00.096 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:00.096 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:00.096 list of memzone associated elements. size: 599.918884 MiB 00:06:00.096 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:00.096 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:00.096 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:00.096 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:00.096 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:00.096 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3645660_0 00:06:00.096 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:00.096 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3645660_0 00:06:00.096 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:00.096 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3645660_0 00:06:00.096 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:00.096 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:00.096 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:00.096 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:00.096 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:00.096 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3645660_0 00:06:00.096 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:00.096 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3645660 00:06:00.096 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:00.096 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3645660 00:06:00.096 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:00.096 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:00.096 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:00.096 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:00.096 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:00.096 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:00.096 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:00.096 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:00.096 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:00.096 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3645660 00:06:00.096 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:00.096 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3645660 00:06:00.096 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:00.096 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3645660 00:06:00.096 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:06:00.096 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3645660 00:06:00.096 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:00.096 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3645660 00:06:00.096 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:00.096 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3645660 00:06:00.096 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:00.096 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:00.096 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:00.096 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:00.096 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:00.097 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:00.097 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:00.097 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3645660 00:06:00.097 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:06:00.097 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3645660 00:06:00.097 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:00.097 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:00.097 element at address: 0x200027a69100 with size: 0.023743 MiB 00:06:00.097 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:00.097 element at address: 0x20000085b100 with size: 0.016113 MiB 00:06:00.097 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3645660 00:06:00.097 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:06:00.097 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:00.097 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:00.097 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3645660 00:06:00.097 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:00.097 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3645660 00:06:00.097 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:00.097 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3645660 00:06:00.097 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:06:00.097 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:00.097 09:38:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:00.097 09:38:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3645660 00:06:00.097 09:38:15 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 3645660 ']' 00:06:00.097 09:38:15 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 3645660 00:06:00.097 09:38:15 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:00.097 09:38:15 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.097 09:38:15 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3645660 00:06:00.097 09:38:15 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.097 09:38:15 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.097 09:38:15 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3645660' 00:06:00.097 killing process with pid 3645660 00:06:00.097 09:38:15 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 3645660 00:06:00.097 09:38:15 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 3645660 00:06:00.358 00:06:00.358 real 0m1.403s 00:06:00.358 user 0m1.465s 00:06:00.358 sys 0m0.427s 00:06:00.358 09:38:15 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.358 09:38:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:00.358 ************************************ 00:06:00.358 END TEST dpdk_mem_utility 00:06:00.358 ************************************ 00:06:00.358 09:38:15 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:00.358 09:38:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.358 09:38:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.358 09:38:15 -- common/autotest_common.sh@10 -- # set +x 00:06:00.358 ************************************ 00:06:00.358 START TEST event 00:06:00.358 ************************************ 00:06:00.358 09:38:15 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:00.358 * Looking for test storage... 00:06:00.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:00.358 09:38:15 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:00.358 09:38:15 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:00.358 09:38:15 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:00.358 09:38:15 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:00.358 09:38:15 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.358 09:38:15 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.358 09:38:15 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.620 09:38:15 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.620 09:38:15 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.620 09:38:15 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.620 09:38:15 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.620 09:38:15 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.620 09:38:15 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.620 09:38:15 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.620 09:38:15 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.620 09:38:15 event -- scripts/common.sh@344 -- # case "$op" in 00:06:00.620 09:38:15 event -- scripts/common.sh@345 -- # : 1 00:06:00.620 09:38:15 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.620 09:38:15 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:00.620 09:38:15 event -- scripts/common.sh@365 -- # decimal 1 00:06:00.620 09:38:15 event -- scripts/common.sh@353 -- # local d=1 00:06:00.620 09:38:15 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.620 09:38:15 event -- scripts/common.sh@355 -- # echo 1 00:06:00.620 09:38:15 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.620 09:38:15 event -- scripts/common.sh@366 -- # decimal 2 00:06:00.620 09:38:15 event -- scripts/common.sh@353 -- # local d=2 00:06:00.620 09:38:15 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.620 09:38:15 event -- scripts/common.sh@355 -- # echo 2 00:06:00.620 09:38:15 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.620 09:38:15 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.620 09:38:15 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.620 09:38:15 event -- scripts/common.sh@368 -- # return 0 00:06:00.620 09:38:15 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.620 09:38:15 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:00.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.620 --rc genhtml_branch_coverage=1 00:06:00.620 --rc genhtml_function_coverage=1 00:06:00.620 --rc genhtml_legend=1 00:06:00.620 --rc geninfo_all_blocks=1 00:06:00.620 --rc geninfo_unexecuted_blocks=1 00:06:00.620 00:06:00.620 ' 00:06:00.620 09:38:15 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:00.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.620 --rc genhtml_branch_coverage=1 00:06:00.620 --rc genhtml_function_coverage=1 00:06:00.620 --rc genhtml_legend=1 00:06:00.620 --rc geninfo_all_blocks=1 00:06:00.620 --rc geninfo_unexecuted_blocks=1 00:06:00.620 00:06:00.620 ' 00:06:00.620 09:38:15 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:00.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.620 --rc genhtml_branch_coverage=1 00:06:00.620 --rc genhtml_function_coverage=1 00:06:00.620 --rc genhtml_legend=1 00:06:00.620 --rc geninfo_all_blocks=1 00:06:00.620 --rc geninfo_unexecuted_blocks=1 00:06:00.620 00:06:00.620 ' 00:06:00.620 09:38:15 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:00.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.620 --rc genhtml_branch_coverage=1 00:06:00.620 --rc genhtml_function_coverage=1 00:06:00.620 --rc genhtml_legend=1 00:06:00.620 --rc geninfo_all_blocks=1 00:06:00.620 --rc geninfo_unexecuted_blocks=1 00:06:00.620 00:06:00.620 ' 00:06:00.620 09:38:15 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:00.620 09:38:15 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:00.620 09:38:15 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:00.620 09:38:15 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:00.620 09:38:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.620 09:38:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.620 ************************************ 00:06:00.620 START TEST event_perf 00:06:00.620 ************************************ 00:06:00.620 09:38:15 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:06:00.620 Running I/O for 1 seconds...[2024-11-27 09:38:15.904558] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:06:00.620 [2024-11-27 09:38:15.904664] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3646008 ] 00:06:00.620 [2024-11-27 09:38:15.996546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:00.620 [2024-11-27 09:38:16.039444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.620 [2024-11-27 09:38:16.039598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.620 [2024-11-27 09:38:16.039751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.620 Running I/O for 1 seconds...[2024-11-27 09:38:16.039752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:02.033 00:06:02.033 lcore 0: 175850 00:06:02.033 lcore 1: 175853 00:06:02.033 lcore 2: 175853 00:06:02.033 lcore 3: 175851 00:06:02.033 done. 00:06:02.033 00:06:02.033 real 0m1.185s 00:06:02.033 user 0m4.088s 00:06:02.033 sys 0m0.092s 00:06:02.033 09:38:17 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.033 09:38:17 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:02.033 ************************************ 00:06:02.033 END TEST event_perf 00:06:02.033 ************************************ 00:06:02.033 09:38:17 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:02.033 09:38:17 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:02.033 09:38:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.033 09:38:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.033 ************************************ 00:06:02.033 START TEST event_reactor 00:06:02.033 ************************************ 00:06:02.033 09:38:17 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:02.033 [2024-11-27 09:38:17.163619] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
00:06:02.033 [2024-11-27 09:38:17.163698] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3646359 ]
00:06:02.033 [2024-11-27 09:38:17.250710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:02.033 [2024-11-27 09:38:17.286724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:03.058 test_start
00:06:03.058 oneshot
00:06:03.058 tick 100
00:06:03.058 tick 100
00:06:03.058 tick 250
00:06:03.058 tick 100
00:06:03.058 tick 100
00:06:03.058 tick 250
00:06:03.058 tick 100
00:06:03.058 tick 500
00:06:03.058 tick 100
00:06:03.058 tick 100
00:06:03.058 tick 250
00:06:03.058 tick 100
00:06:03.058 tick 100
00:06:03.058 test_end
00:06:03.058
00:06:03.058 real 0m1.170s
00:06:03.058 user 0m1.088s
00:06:03.058 sys 0m0.077s
00:06:03.058 09:38:18 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:03.058 09:38:18 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:06:03.058 ************************************
00:06:03.058 END TEST event_reactor
00:06:03.058 ************************************
00:06:03.058 09:38:18 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:03.058 09:38:18 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:06:03.058 09:38:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:03.058 09:38:18 event -- common/autotest_common.sh@10 -- # set +x
00:06:03.058 ************************************
00:06:03.058 START TEST event_reactor_perf
00:06:03.058 ************************************
00:06:03.058 09:38:18 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:03.058 [2024-11-27 09:38:18.412212] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization...
00:06:03.058 [2024-11-27 09:38:18.412305] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3646707 ]
00:06:03.369 [2024-11-27 09:38:18.503478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:03.369 [2024-11-27 09:38:18.541492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:04.310 test_start
00:06:04.310 test_end
00:06:04.310 Performance: 539155 events per second
00:06:04.310
00:06:04.310 real 0m1.177s
00:06:04.310 user 0m1.085s
00:06:04.310 sys 0m0.088s
00:06:04.310 09:38:19 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:04.310 09:38:19 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:06:04.310 ************************************
00:06:04.310 END TEST event_reactor_perf
00:06:04.310 ************************************
00:06:04.310 09:38:19 event -- event/event.sh@49 -- # uname -s
00:06:04.310 09:38:19 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:06:04.310 09:38:19 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:06:04.310 09:38:19 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:04.310 09:38:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:04.310 09:38:19 event -- common/autotest_common.sh@10 -- # set +x
00:06:04.310 ************************************
00:06:04.310 START TEST event_scheduler
00:06:04.310 ************************************
00:06:04.310 09:38:19 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:06:04.310 * Looking for test storage...
00:06:04.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:04.310 09:38:19 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:04.310 09:38:19 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:04.310 09:38:19 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:04.571 09:38:19 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:04.571 09:38:19 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.571 09:38:19 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.571 09:38:19 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.571 09:38:19 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.571 09:38:19 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.571 09:38:19 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.571 09:38:19 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.571 09:38:19 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.571 09:38:19 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.571 09:38:19 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.571 09:38:19 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.571 09:38:19 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:04.571 09:38:19 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:04.571 09:38:19 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.571 09:38:19 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:04.571 09:38:19 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:04.571 09:38:19 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:04.571 09:38:19 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.571 09:38:19 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:04.571 09:38:19 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.571 09:38:19 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:04.571 09:38:19 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:04.571 09:38:19 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.571 09:38:19 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:04.571 09:38:19 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.571 09:38:19 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.571 09:38:19 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.572 09:38:19 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:04.572 09:38:19 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.572 09:38:19 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:04.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.572 --rc genhtml_branch_coverage=1 00:06:04.572 --rc genhtml_function_coverage=1 00:06:04.572 --rc genhtml_legend=1 00:06:04.572 --rc geninfo_all_blocks=1 00:06:04.572 --rc geninfo_unexecuted_blocks=1 00:06:04.572 00:06:04.572 ' 00:06:04.572 09:38:19 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:04.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.572 --rc genhtml_branch_coverage=1 00:06:04.572 --rc genhtml_function_coverage=1 00:06:04.572 --rc genhtml_legend=1 00:06:04.572 --rc geninfo_all_blocks=1 00:06:04.572 --rc geninfo_unexecuted_blocks=1 00:06:04.572 00:06:04.572 ' 00:06:04.572 09:38:19 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:04.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.572 --rc genhtml_branch_coverage=1 00:06:04.572 --rc genhtml_function_coverage=1 00:06:04.572 --rc genhtml_legend=1 00:06:04.572 --rc geninfo_all_blocks=1 00:06:04.572 --rc geninfo_unexecuted_blocks=1 00:06:04.572 00:06:04.572 ' 00:06:04.572 09:38:19 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:04.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.572 --rc genhtml_branch_coverage=1 00:06:04.572 --rc genhtml_function_coverage=1 00:06:04.572 --rc genhtml_legend=1 00:06:04.572 --rc geninfo_all_blocks=1 00:06:04.572 --rc geninfo_unexecuted_blocks=1 00:06:04.572 00:06:04.572 ' 00:06:04.572 09:38:19 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:04.572 09:38:19 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3647063 00:06:04.572 09:38:19 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:04.572 09:38:19 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3647063 00:06:04.572 09:38:19 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc 
-f 00:06:04.572 09:38:19 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 3647063 ']' 00:06:04.572 09:38:19 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.572 09:38:19 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.572 09:38:19 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.572 09:38:19 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.572 09:38:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:04.572 [2024-11-27 09:38:19.903515] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:06:04.572 [2024-11-27 09:38:19.903594] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3647063 ] 00:06:04.572 [2024-11-27 09:38:19.996879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:04.833 [2024-11-27 09:38:20.054944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.833 [2024-11-27 09:38:20.055106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.833 [2024-11-27 09:38:20.055274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.833 [2024-11-27 09:38:20.055457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.405 09:38:20 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.405 09:38:20 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:05.405 09:38:20 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:05.405 09:38:20 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.405 09:38:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:05.405 [2024-11-27 09:38:20.725760] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:05.405 [2024-11-27 09:38:20.725778] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:05.405 [2024-11-27 09:38:20.725789] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:05.405 [2024-11-27 09:38:20.725795] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:05.405 [2024-11-27 09:38:20.725801] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:05.405 09:38:20 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.405 09:38:20 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:05.405 09:38:20 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.405 09:38:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:05.405 [2024-11-27 09:38:20.788443] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
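
The rpc_cmd framework_set_scheduler dynamic step above is an ordinary SPDK JSON-RPC call sent to the test application's Unix domain socket (/var/tmp/spdk.sock, the default that waitforlisten polls). framework_set_scheduler also appears in the rpc_get_methods listing earlier in this log. A minimal stand-alone client is sketched below using only the Python standard library; the framing (one JSON-RPC 2.0 object per request, response not newline-terminated) mirrors what scripts/rpc.py does, but this is an illustrative sketch, not part of the test suite.

import json
import socket

def spdk_rpc(sock_path, method, params=None, req_id=1):
    # Send one JSON-RPC 2.0 request to an SPDK app's Unix domain socket
    # and keep reading until a complete JSON object has been received.
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params is not None:
        req["params"] = params
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(req).encode("utf-8"))
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("socket closed before a full response arrived")
            buf += chunk
            try:
                # raw_decode succeeds once the whole response object is buffered
                return json.JSONDecoder().raw_decode(buf.decode("utf-8"))[0]
            except ValueError:
                continue

# Equivalent of: scripts/rpc.py framework_set_scheduler dynamic
print(spdk_rpc("/var/tmp/spdk.sock", "framework_set_scheduler", {"name": "dynamic"}))

The same helper would also work against the TCP bridge used in the earlier spdkcli_tcp run (socat TCP-LISTEN:9998 to UNIX-CONNECT:/var/tmp/spdk.sock) if the AF_UNIX socket is swapped for an AF_INET connection to 127.0.0.1:9998.
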
00:06:05.405 09:38:20 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.405 09:38:20 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:05.405 09:38:20 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.405 09:38:20 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.405 09:38:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:05.405 ************************************ 00:06:05.405 START TEST scheduler_create_thread 00:06:05.405 ************************************ 00:06:05.405 09:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:05.405 09:38:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:05.405 09:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.405 09:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.405 2 00:06:05.405 09:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.405 09:38:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:05.405 09:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.405 09:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.405 3 00:06:05.405 09:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.405 09:38:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:05.405 09:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.405 09:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.405 4 00:06:05.405 09:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.405 09:38:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:05.405 09:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.405 09:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.667 5 00:06:05.667 09:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.667 09:38:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:05.667 09:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.667 09:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.667 6 00:06:05.667 09:38:20 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.667 09:38:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:05.667 09:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.667 09:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.667 7 00:06:05.667 09:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.667 09:38:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:05.667 09:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.667 09:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.667 8 00:06:05.667 09:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.667 09:38:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:05.667 09:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.667 09:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.667 9 00:06:05.667 09:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.667 09:38:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:05.667 09:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.667 09:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.929 10 00:06:05.929 09:38:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.929 09:38:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:05.929 09:38:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.929 09:38:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.314 09:38:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.314 09:38:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:07.314 09:38:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:07.314 09:38:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.314 09:38:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.258 09:38:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.258 09:38:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:08.258 09:38:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.258 09:38:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.831 09:38:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.831 09:38:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:08.831 09:38:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:08.831 09:38:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.831 09:38:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.776 09:38:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.776 00:06:09.776 real 0m4.224s 00:06:09.776 user 0m0.025s 00:06:09.776 sys 0m0.007s 00:06:09.776 09:38:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.776 09:38:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.776 ************************************ 00:06:09.776 END TEST scheduler_create_thread 00:06:09.776 ************************************ 00:06:09.776 09:38:25 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:09.776 09:38:25 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3647063 00:06:09.776 09:38:25 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 3647063 ']' 00:06:09.776 09:38:25 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 3647063 00:06:09.776 09:38:25 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:09.776 09:38:25 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.776 09:38:25 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3647063 00:06:09.776 09:38:25 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:09.776 09:38:25 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:09.776 09:38:25 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3647063' 00:06:09.776 killing process with pid 3647063 00:06:09.776 09:38:25 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 3647063 00:06:09.776 09:38:25 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 3647063 00:06:10.037 [2024-11-27 09:38:25.334088] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
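
The scheduler_create_thread subtest above drives RPCs that exist only in the test plugin under test/event/scheduler, loaded with rpc.py --plugin scheduler_plugin: scheduler_thread_create, scheduler_thread_set_active and scheduler_thread_delete. For illustration, the tail of the trace (thread 11 raised to 50% active, then thread 12 created and deleted) could be reproduced roughly as below. The JSON parameter keys and the result shape are assumptions inferred from the CLI flags (-n name, -m cpu_mask, -a active) and from the thread ids captured in the trace, not a documented API; the helper repeats the sketch shown earlier.

import json
import socket

def spdk_rpc(sock_path, method, params=None):
    # Same minimal JSON-RPC 2.0 round trip as in the earlier sketch.
    req = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params or {}}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(req).encode("utf-8"))
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("socket closed mid-response")
            buf += chunk
            try:
                return json.JSONDecoder().raw_decode(buf.decode("utf-8"))[0]
            except ValueError:
                continue

SOCK = "/var/tmp/spdk.sock"
# scheduler_thread_create -n half_active -a 0   (trace assigned thread_id=11)
# "name"/"active" keys are assumed from the -n/-a flags, see note above.
t11 = spdk_rpc(SOCK, "scheduler_thread_create",
               {"name": "half_active", "active": 0})["result"]
# scheduler_thread_set_active 11 50
spdk_rpc(SOCK, "scheduler_thread_set_active", {"thread_id": t11, "active": 50})
# scheduler_thread_create -n deleted -a 100     (trace assigned thread_id=12)
t12 = spdk_rpc(SOCK, "scheduler_thread_create",
               {"name": "deleted", "active": 100})["result"]
# scheduler_thread_delete 12
spdk_rpc(SOCK, "scheduler_thread_delete", {"thread_id": t12})
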
00:06:10.037
00:06:10.037 real 0m5.840s
00:06:10.037 user 0m12.891s
00:06:10.037 sys 0m0.433s
00:06:10.037 09:38:25 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:10.037 09:38:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:10.037 ************************************
00:06:10.037 END TEST event_scheduler
00:06:10.037 ************************************
00:06:10.298 09:38:25 event -- event/event.sh@51 -- # modprobe -n nbd
00:06:10.298 09:38:25 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:06:10.298 09:38:25 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:10.298 09:38:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:10.298 09:38:25 event -- common/autotest_common.sh@10 -- # set +x
00:06:10.298 ************************************
00:06:10.298 START TEST app_repeat
00:06:10.298 ************************************
00:06:10.298 09:38:25 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:06:10.298 09:38:25 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:10.298 09:38:25 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:10.298 09:38:25 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:06:10.298 09:38:25 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:10.298 09:38:25 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:06:10.298 09:38:25 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:06:10.298 09:38:25 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:06:10.298 09:38:25 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3648172
00:06:10.298 09:38:25 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:06:10.298 09:38:25 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:06:10.298 09:38:25 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3648172'
00:06:10.298 Process app_repeat pid: 3648172
00:06:10.298 09:38:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:10.298 09:38:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:06:10.298 spdk_app_start Round 0
00:06:10.298 09:38:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3648172 /var/tmp/spdk-nbd.sock
00:06:10.298 09:38:25 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3648172 ']'
00:06:10.298 09:38:25 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:10.298 09:38:25 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:10.298 09:38:25 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:10.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:10.298 09:38:25 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:10.298 09:38:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:10.298 [2024-11-27 09:38:25.616879] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization...
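Each app_repeat round that follows repeats the same pattern: wait for the app to listen on /var/tmp/spdk-nbd.sock, create two Malloc bdevs, export them over NBD, write and verify data, then SIGTERM the app and loop. The setup step, reduced to its two RPCs (64 MB bdevs with a 4096-byte block size, names assigned by the target):

  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc1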
00:06:10.298 [2024-11-27 09:38:25.616973] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3648172 ]
00:06:10.298 [2024-11-27 09:38:25.701964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:10.298 [2024-11-27 09:38:25.735998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:10.298 [2024-11-27 09:38:25.736000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:10.560 09:38:25 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:10.560 09:38:25 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:06:10.560 09:38:25 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:10.560 Malloc0
00:06:10.560 09:38:25 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:10.821 Malloc1
00:06:10.821 09:38:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:10.821 09:38:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:10.821 09:38:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:10.821 09:38:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:10.821 09:38:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:10.821 09:38:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:10.821 09:38:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:10.821 09:38:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:10.821 09:38:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:10.821 09:38:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:10.821 09:38:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:10.821 09:38:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:10.821 09:38:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:10.821 09:38:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:10.821 09:38:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:10.821 09:38:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:11.081 /dev/nbd0
00:06:11.081 09:38:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:11.081 09:38:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:11.081 09:38:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:06:11.081 09:38:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:11.081 09:38:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:11.081 09:38:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:11.081 09:38:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:06:11.081 09:38:26 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:11.081 09:38:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:11.081 09:38:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:11.081 09:38:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:11.081 1+0 records in
00:06:11.081 1+0 records out
00:06:11.081 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027142 s, 15.1 MB/s
00:06:11.081 09:38:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:11.081 09:38:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:11.081 09:38:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:11.081 09:38:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:11.081 09:38:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:11.081 09:38:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:11.081 09:38:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:11.081 09:38:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:11.342 /dev/nbd1
00:06:11.342 09:38:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:11.342 09:38:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:11.342 09:38:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:06:11.342 09:38:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:11.342 09:38:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:11.342 09:38:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:11.342 09:38:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:06:11.342 09:38:26 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:11.342 09:38:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:11.342 09:38:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:11.342 09:38:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:11.342 1+0 records in
00:06:11.342 1+0 records out
00:06:11.342 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270935 s, 15.1 MB/s
00:06:11.342 09:38:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:11.342 09:38:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:11.342 09:38:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:11.342 09:38:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:11.342 09:38:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:11.342 09:38:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:11.342 09:38:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:11.342 09:38:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:11.342 09:38:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:11.342 09:38:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:11.602 09:38:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:11.602 {
00:06:11.602 "nbd_device": "/dev/nbd0",
00:06:11.602 "bdev_name": "Malloc0"
00:06:11.602 },
00:06:11.602 {
00:06:11.602 "nbd_device": "/dev/nbd1",
00:06:11.602 "bdev_name": "Malloc1"
00:06:11.602 }
00:06:11.602 ]'
00:06:11.602 09:38:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:11.602 {
00:06:11.602 "nbd_device": "/dev/nbd0",
00:06:11.602 "bdev_name": "Malloc0"
00:06:11.602 },
00:06:11.602 {
00:06:11.602 "nbd_device": "/dev/nbd1",
00:06:11.602 "bdev_name": "Malloc1"
00:06:11.602 }
00:06:11.602 ]'
00:06:11.602 09:38:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:11.602 09:38:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:11.602 /dev/nbd1'
00:06:11.602 09:38:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:11.602 /dev/nbd1'
00:06:11.602 09:38:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:11.602 09:38:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:11.602 09:38:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:11.602 09:38:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:11.602 09:38:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:11.602 09:38:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:11.602 09:38:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:11.602 09:38:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:11.603 09:38:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:11.603 09:38:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:11.603 09:38:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:11.603 09:38:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:11.603 256+0 records in
00:06:11.603 256+0 records out
00:06:11.603 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118942 s, 88.2 MB/s
00:06:11.603 09:38:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:11.603 09:38:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:11.603 256+0 records in
00:06:11.603 256+0 records out
00:06:11.603 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119465 s, 87.8 MB/s
00:06:11.603 09:38:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:11.603 09:38:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:11.603 256+0 records in
00:06:11.603 256+0 records out
00:06:11.603 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130739 s, 80.2 MB/s
00:06:11.603 09:38:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:11.603 09:38:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:11.603 09:38:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:11.603 09:38:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:11.603 09:38:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:11.603 09:38:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:11.603 09:38:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:11.603 09:38:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:11.603 09:38:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:11.603 09:38:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:11.603 09:38:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:11.603 09:38:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:11.603 09:38:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:11.603 09:38:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:11.603 09:38:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:11.603 09:38:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:11.603 09:38:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:11.603 09:38:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:11.603 09:38:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:11.863 09:38:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:11.864 09:38:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:11.864 09:38:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:11.864 09:38:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:11.864 09:38:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:11.864 09:38:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:11.864 09:38:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:11.864 09:38:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:11.864 09:38:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:11.864 09:38:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:12.135 09:38:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:12.135 09:38:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:12.135 09:38:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:12.135 09:38:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:12.135 09:38:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:12.135 09:38:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:12.135 09:38:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:12.135 09:38:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:12.135 09:38:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:12.136 09:38:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:12.136 09:38:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:12.136 09:38:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:12.136 09:38:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:12.136 09:38:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:12.398 09:38:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:12.398 09:38:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:12.398 09:38:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:12.398 09:38:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:12.398 09:38:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:12.398 09:38:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:12.398 09:38:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:12.398 09:38:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:12.398 09:38:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:12.398 09:38:27 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:12.398 09:38:27 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:12.660 [2024-11-27 09:38:27.885782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:12.660 [2024-11-27 09:38:27.914844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:12.660 [2024-11-27 09:38:27.914845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:12.660 [2024-11-27 09:38:27.944006] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:12.660 [2024-11-27 09:38:27.944038] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:15.958 09:38:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:15.958 09:38:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:06:15.958 spdk_app_start Round 1
00:06:15.958 09:38:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3648172 /var/tmp/spdk-nbd.sock
00:06:15.958 09:38:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3648172 ']'
00:06:15.958 09:38:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:15.958 09:38:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:15.958 09:38:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:15.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
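Attaching a bdev to an NBD node and probing it follows the waitfornbd pattern seen above: start the disk over RPC, poll /proc/partitions until the kernel exposes it, then issue one direct-I/O read as a liveness check. In outline (scratch path and the retry sleep are illustrative; the harness uses test/event/nbdtest under the workspace and retries up to 20 times):

  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  for i in $(seq 1 20); do grep -q -w nbd0 /proc/partitions && break; sleep 0.1; done
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct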
00:06:15.958 09:38:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:15.958 09:38:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:15.958 09:38:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:15.958 09:38:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:06:15.958 09:38:31 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:15.958 Malloc0
00:06:15.958 09:38:31 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:15.958 Malloc1
00:06:15.958 09:38:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:15.958 09:38:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:15.958 09:38:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:15.958 09:38:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:15.958 09:38:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:15.958 09:38:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:15.958 09:38:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:15.958 09:38:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:15.958 09:38:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:15.958 09:38:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:15.958 09:38:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:15.958 09:38:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:15.958 09:38:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:15.958 09:38:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:15.958 09:38:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:15.958 09:38:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:16.219 /dev/nbd0
00:06:16.219 09:38:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:16.219 09:38:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:16.219 09:38:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:06:16.219 09:38:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:16.219 09:38:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:16.219 09:38:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:16.219 09:38:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:06:16.219 09:38:31 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:16.219 09:38:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:16.219 09:38:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:16.219 09:38:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:16.219 1+0 records in
00:06:16.219 1+0 records out
00:06:16.219 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315036 s, 13.0 MB/s
00:06:16.219 09:38:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:16.219 09:38:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:16.219 09:38:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:16.219 09:38:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:16.219 09:38:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:16.219 09:38:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:16.219 09:38:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:16.219 09:38:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:16.479 /dev/nbd1
00:06:16.479 09:38:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:16.479 09:38:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:16.479 09:38:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:06:16.479 09:38:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:16.479 09:38:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:16.479 09:38:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:16.479 09:38:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:06:16.479 09:38:31 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:16.479 09:38:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:16.479 09:38:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:16.479 09:38:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:16.479 1+0 records in
00:06:16.479 1+0 records out
00:06:16.479 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279941 s, 14.6 MB/s
00:06:16.479 09:38:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:16.479 09:38:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:16.479 09:38:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:16.479 09:38:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:16.479 09:38:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:16.479 09:38:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:16.479 09:38:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:16.479 09:38:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:16.479 09:38:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:16.479 09:38:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:16.740 {
00:06:16.740 "nbd_device": "/dev/nbd0",
00:06:16.740 "bdev_name": "Malloc0"
00:06:16.740 },
00:06:16.740 {
00:06:16.740 "nbd_device": "/dev/nbd1",
00:06:16.740 "bdev_name": "Malloc1"
00:06:16.740 }
00:06:16.740 ]'
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:16.740 {
00:06:16.740 "nbd_device": "/dev/nbd0",
00:06:16.740 "bdev_name": "Malloc0"
00:06:16.740 },
00:06:16.740 {
00:06:16.740 "nbd_device": "/dev/nbd1",
00:06:16.740 "bdev_name": "Malloc1"
00:06:16.740 }
00:06:16.740 ]'
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:16.740 /dev/nbd1'
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:16.740 /dev/nbd1'
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:16.740 256+0 records in
00:06:16.740 256+0 records out
00:06:16.740 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116244 s, 90.2 MB/s
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:16.740 256+0 records in
00:06:16.740 256+0 records out
00:06:16.740 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120256 s, 87.2 MB/s
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:16.740 256+0 records in
00:06:16.740 256+0 records out
00:06:16.740 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127978 s, 81.9 MB/s
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:16.740 09:38:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:17.001 09:38:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:17.001 09:38:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:17.001 09:38:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:17.001 09:38:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:17.001 09:38:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:17.001 09:38:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:17.002 09:38:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:17.002 09:38:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:17.002 09:38:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:17.002 09:38:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:17.262 09:38:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:17.262 09:38:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:17.262 09:38:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:17.262 09:38:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:17.262 09:38:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:17.262 09:38:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:17.262 09:38:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:17.262 09:38:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:17.262 09:38:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:17.262 09:38:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
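The write/verify pass is symmetric across rounds: 1 MiB of random data (256 x 4 KiB blocks) is staged to a scratch file, streamed onto each NBD device with O_DIRECT, and compared back byte-for-byte with cmp. A condensed equivalent (scratch path illustrative; the harness stages test/event/nbdrandtest under the workspace):

  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  for d in /dev/nbd0 /dev/nbd1; do
      dd if=/tmp/nbdrandtest of=$d bs=4096 count=256 oflag=direct
      cmp -b -n 1M /tmp/nbdrandtest $d
  done
  rm /tmp/nbdrandtest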
00:06:17.262 09:38:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:17.262 09:38:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:17.262 09:38:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:17.262 09:38:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:17.523 09:38:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:17.523 09:38:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:17.523 09:38:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:17.523 09:38:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:17.523 09:38:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:17.523 09:38:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:17.523 09:38:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:17.523 09:38:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:17.523 09:38:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:17.523 09:38:32 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:17.523 09:38:32 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:17.785 [2024-11-27 09:38:33.031462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:17.785 [2024-11-27 09:38:33.061676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:17.785 [2024-11-27 09:38:33.061677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:17.785 [2024-11-27 09:38:33.091422] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:17.785 [2024-11-27 09:38:33.091451] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:21.085 09:38:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:21.085 09:38:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:06:21.085 spdk_app_start Round 2
00:06:21.085 09:38:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3648172 /var/tmp/spdk-nbd.sock
00:06:21.085 09:38:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3648172 ']'
00:06:21.085 09:38:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:21.085 09:38:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:21.085 09:38:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:21.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:21.085 09:38:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:21.085 09:38:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:21.085 09:38:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:21.085 09:38:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:06:21.085 09:38:36 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:21.085 Malloc0
00:06:21.085 09:38:36 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:21.085 Malloc1
00:06:21.085 09:38:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:21.085 09:38:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:21.085 09:38:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:21.085 09:38:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:21.085 09:38:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:21.085 09:38:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:21.085 09:38:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:21.085 09:38:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:21.085 09:38:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:21.085 09:38:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:21.085 09:38:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:21.085 09:38:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:21.085 09:38:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:21.085 09:38:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:21.085 09:38:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:21.085 09:38:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:21.345 /dev/nbd0
00:06:21.345 09:38:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:21.345 09:38:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:21.345 09:38:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:06:21.345 09:38:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:21.345 09:38:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:21.345 09:38:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:21.345 09:38:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:06:21.345 09:38:36 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:21.345 09:38:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:21.345 09:38:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:21.345 09:38:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:21.345 1+0 records in
00:06:21.345 1+0 records out
00:06:21.345 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273451 s, 15.0 MB/s
00:06:21.345 09:38:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:21.345 09:38:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:21.345 09:38:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:21.345 09:38:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:21.345 09:38:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:21.345 09:38:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:21.345 09:38:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:21.345 09:38:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:21.606 /dev/nbd1
00:06:21.606 09:38:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:21.606 09:38:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:21.606 09:38:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:06:21.606 09:38:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:21.606 09:38:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:21.606 09:38:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:21.606 09:38:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:06:21.606 09:38:36 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:21.606 09:38:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:21.606 09:38:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:21.606 09:38:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:21.606 1+0 records in
00:06:21.606 1+0 records out
00:06:21.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002973 s, 13.8 MB/s
00:06:21.606 09:38:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:21.606 09:38:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:21.606 09:38:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:21.606 09:38:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:21.606 09:38:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:21.606 09:38:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:21.606 09:38:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:21.606 09:38:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:21.606 09:38:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:21.606 09:38:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:21.866 {
00:06:21.866 "nbd_device": "/dev/nbd0",
00:06:21.866 "bdev_name": "Malloc0"
00:06:21.866 },
00:06:21.866 {
00:06:21.866 "nbd_device": "/dev/nbd1",
00:06:21.866 "bdev_name": "Malloc1"
00:06:21.866 }
00:06:21.866 ]'
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:21.866 {
00:06:21.866 "nbd_device": "/dev/nbd0",
00:06:21.866 "bdev_name": "Malloc0"
00:06:21.866 },
00:06:21.866 {
00:06:21.866 "nbd_device": "/dev/nbd1",
00:06:21.866 "bdev_name": "Malloc1"
00:06:21.866 }
00:06:21.866 ]'
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:21.866 /dev/nbd1'
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:21.866 /dev/nbd1'
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:21.866 256+0 records in
00:06:21.866 256+0 records out
00:06:21.866 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127202 s, 82.4 MB/s
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:21.866 256+0 records in
00:06:21.866 256+0 records out
00:06:21.866 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123168 s, 85.1 MB/s
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:21.866 256+0 records in
00:06:21.866 256+0 records out
00:06:21.866 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129452 s, 81.0 MB/s
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:21.866 09:38:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:22.127 09:38:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:22.127 09:38:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:22.127 09:38:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:22.127 09:38:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:22.127 09:38:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:22.127 09:38:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:22.127 09:38:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:22.127 09:38:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:22.127 09:38:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:22.127 09:38:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:22.390 09:38:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:22.390 09:38:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:22.390 09:38:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:22.390 09:38:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:22.390 09:38:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:22.390 09:38:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:22.390 09:38:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:22.390 09:38:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:22.390 09:38:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:22.390 09:38:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:22.390 09:38:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:22.651 09:38:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:22.651 09:38:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:22.651 09:38:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:22.651 09:38:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:22.651 09:38:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:22.651 09:38:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:22.651 09:38:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:22.651 09:38:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:22.651 09:38:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:22.651 09:38:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:22.651 09:38:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:22.651 09:38:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:22.651 09:38:37 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:22.651 09:38:38 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:22.912 [2024-11-27 09:38:38.191401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:22.912 [2024-11-27 09:38:38.220450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:22.912 [2024-11-27 09:38:38.220538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:22.912 [2024-11-27 09:38:38.249751] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:22.912 [2024-11-27 09:38:38.249783] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:26.216 09:38:41 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3648172 /var/tmp/spdk-nbd.sock
00:06:26.216 09:38:41 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3648172 ']'
00:06:26.216 09:38:41 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:26.216 09:38:41 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:26.216 09:38:41 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:26.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:26.216 09:38:41 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:26.216 09:38:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:26.216 09:38:41 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:26.216 09:38:41 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:06:26.216 09:38:41 event.app_repeat -- event/event.sh@39 -- # killprocess 3648172
00:06:26.216 09:38:41 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 3648172 ']'
00:06:26.216 09:38:41 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 3648172
00:06:26.216 09:38:41 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:06:26.216 09:38:41 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:26.216 09:38:41 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3648172
00:06:26.216 09:38:41 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:26.216 09:38:41 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:26.216 09:38:41 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3648172'
00:06:26.216 killing process with pid 3648172
00:06:26.216 09:38:41 event.app_repeat -- common/autotest_common.sh@973 -- # kill 3648172
00:06:26.216 09:38:41 event.app_repeat -- common/autotest_common.sh@978 -- # wait 3648172
00:06:26.216 spdk_app_start is called in Round 0.
00:06:26.216 Shutdown signal received, stop current app iteration
00:06:26.216 Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 reinitialization...
00:06:26.216 spdk_app_start is called in Round 1.
00:06:26.216 Shutdown signal received, stop current app iteration
00:06:26.216 Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 reinitialization...
00:06:26.216 spdk_app_start is called in Round 2.
00:06:26.216 Shutdown signal received, stop current app iteration
00:06:26.216 Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 reinitialization...
00:06:26.216 spdk_app_start is called in Round 3.
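killprocess, traced above for pid 3648172, is the harness's guarded teardown: confirm the pid is still alive with kill -0, refuse to signal anything whose comm is sudo, then SIGTERM and reap it. Roughly (a sketch of the logic, not the exact helper):

  kill -0 "$pid"                                        # errors out if the process is already gone
  [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || exit 1
  kill "$pid" && wait "$pid"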
00:06:26.216 Shutdown signal received, stop current app iteration 00:06:26.216 09:38:41 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:26.216 09:38:41 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:26.216 00:06:26.216 real 0m15.879s 00:06:26.216 user 0m34.871s 00:06:26.216 sys 0m2.337s 00:06:26.216 09:38:41 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.216 09:38:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:26.216 ************************************ 00:06:26.216 END TEST app_repeat 00:06:26.216 ************************************ 00:06:26.216 09:38:41 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:26.216 09:38:41 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:26.216 09:38:41 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.216 09:38:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.216 09:38:41 event -- common/autotest_common.sh@10 -- # set +x 00:06:26.216 ************************************ 00:06:26.216 START TEST cpu_locks 00:06:26.216 ************************************ 00:06:26.216 09:38:41 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:26.216 * Looking for test storage... 00:06:26.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:26.216 09:38:41 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:26.216 09:38:41 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:26.216 09:38:41 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:26.478 09:38:41 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
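For reference: the nbd teardown traced at the start of this test reduces to one RPC-plus-jq pipeline. A minimal sketch, assuming the rpc.py, socket path, and jq filter from this run; count_nbd_disks is an illustrative name, not a helper in the SPDK tree:

  # Count NBD devices currently exported by the target (sketch)
  count_nbd_disks() {
    local rpc_sock=$1
    # nbd_get_disks returns a JSON array; pull out each nbd_device path
    scripts/rpc.py -s "$rpc_sock" nbd_get_disks \
      | jq -r '.[] | .nbd_device' \
      | grep -c /dev/nbd || true   # grep -c exits non-zero on zero matches, hence the guard
  }
  count_nbd_disks /var/tmp/spdk-nbd.sock   # prints 0 once teardown is complete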
00:06:26.216 09:38:41 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:06:26.216 09:38:41 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:06:26.216 09:38:41 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:26.216 09:38:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:26.216 09:38:41 event -- common/autotest_common.sh@10 -- # set +x
00:06:26.216 ************************************
00:06:26.216 START TEST cpu_locks
00:06:26.216 ************************************
00:06:26.216 09:38:41 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:06:26.216 * Looking for test storage...
00:06:26.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:06:26.216 09:38:41 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:26.216 09:38:41 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version
00:06:26.216 09:38:41 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:26.478 09:38:41 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:26.478 09:38:41 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:06:26.478 09:38:41 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:26.478 09:38:41 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:26.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:26.478 --rc genhtml_branch_coverage=1
00:06:26.478 --rc genhtml_function_coverage=1
00:06:26.478 --rc genhtml_legend=1
00:06:26.478 --rc geninfo_all_blocks=1
00:06:26.478 --rc geninfo_unexecuted_blocks=1
00:06:26.478
00:06:26.478 '
00:06:26.478 09:38:41 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:26.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:26.478 --rc genhtml_branch_coverage=1
00:06:26.478 --rc genhtml_function_coverage=1
00:06:26.478 --rc genhtml_legend=1
00:06:26.478 --rc geninfo_all_blocks=1
00:06:26.478 --rc geninfo_unexecuted_blocks=1
00:06:26.478
00:06:26.478 '
00:06:26.478 09:38:41 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:06:26.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:26.478 --rc genhtml_branch_coverage=1
00:06:26.478 --rc genhtml_function_coverage=1
00:06:26.478 --rc genhtml_legend=1
00:06:26.478 --rc geninfo_all_blocks=1
00:06:26.478 --rc geninfo_unexecuted_blocks=1
00:06:26.478
00:06:26.478 '
00:06:26.478 09:38:41 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:06:26.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:26.478 --rc genhtml_branch_coverage=1
00:06:26.478 --rc genhtml_function_coverage=1
00:06:26.478 --rc genhtml_legend=1
00:06:26.478 --rc geninfo_all_blocks=1
00:06:26.478 --rc geninfo_unexecuted_blocks=1
00:06:26.478
00:06:26.478 '
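The cmp_versions trace above is autotest's guard against old lcov releases: both version strings are split on '.', '-' and ':' and compared field by field. A standalone sketch of the same idea (version_lt is an illustrative name; scripts/common.sh spells this differently):

  # Return success if $1 < $2, comparing dot-separated numeric fields (sketch)
  version_lt() {
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly smaller field decides
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
  }
  version_lt 1.15 2 && echo 'lcov is older than 2.x'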
00:06:26.478 09:38:41 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:06:26.478 09:38:41 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:06:26.478 09:38:41 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:06:26.478 09:38:41 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:06:26.478 09:38:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:26.478 09:38:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:26.478 09:38:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:26.478 ************************************
00:06:26.478 START TEST default_locks
************************************
00:06:26.478 09:38:41 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:06:26.478 09:38:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3651717
00:06:26.478 09:38:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3651717
00:06:26.478 09:38:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:26.478 09:38:41 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3651717 ']'
00:06:26.478 09:38:41 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:26.478 09:38:41 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:26.478 09:38:41 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:26.478 09:38:41 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:26.478 09:38:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
[2024-11-27 09:38:41.836808] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization...
[2024-11-27 09:38:41.836872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3651717 ]
[2024-11-27 09:38:41.925927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-27 09:38:41.965510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:27.311 09:38:42 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:27.311 09:38:42 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:06:27.311 09:38:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3651717
00:06:27.311 09:38:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:27.311 09:38:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3651717
00:06:27.883 lslocks: write error
00:06:27.883 09:38:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3651717
00:06:27.883 09:38:43 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 3651717 ']'
00:06:27.883 09:38:43 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 3651717
00:06:27.883 09:38:43 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:06:27.883 09:38:43 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:27.883 09:38:43 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3651717
00:06:27.883 09:38:43 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:27.883 09:38:43 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:27.883 09:38:43 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3651717'
00:06:27.883 killing process with pid 3651717
00:06:27.883 09:38:43 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 3651717
00:06:27.883 09:38:43 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 3651717
00:06:27.883 09:38:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3651717
00:06:27.883 09:38:43 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:06:27.883 09:38:43 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3651717
00:06:27.883 09:38:43 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:27.883 09:38:43 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:27.883 09:38:43 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:06:27.883 09:38:43 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:27.883 09:38:43 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 3651717
00:06:27.883 09:38:43 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3651717 ']'
00:06:27.883 09:38:43 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:27.883 09:38:43 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:27.883 09:38:43 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:27.884 09:38:43 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:27.884 09:38:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:27.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3651717) - No such process
00:06:27.884 ERROR: process (pid: 3651717) is no longer running
00:06:27.884 09:38:43 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:27.884 09:38:43 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:06:27.884 09:38:43 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:06:27.884 09:38:43 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:27.884 09:38:43 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:27.884 09:38:43 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:27.884 09:38:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:06:27.884 09:38:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:27.884 09:38:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:06:27.884 09:38:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:27.884
00:06:27.884 real 0m1.547s
00:06:27.884 user 0m1.681s
00:06:27.884 sys 0m0.538s
00:06:27.884 09:38:43 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:27.884 09:38:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:28.145 ************************************
00:06:28.145 END TEST default_locks
00:06:28.145 ************************************
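Every cpu_locks subtest leans on the same primitive seen in locks_exist above: ask lslocks which files the process has locked and look for the spdk_cpu_lock prefix. A minimal sketch, assuming util-linux lslocks; has_cpu_lock is an illustrative name:

  # Does $1 (a PID) still hold a lock on an spdk_cpu_lock_* file? (sketch)
  has_cpu_lock() {
    # lslocks -p lists the file locks held by that process; grep -q just tests for a match
    lslocks -p "$1" | grep -q spdk_cpu_lock
  }
  has_cpu_lock 3651717 && echo 'core lock still held'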
00:06:28.145 09:38:43 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:06:28.145 09:38:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:28.145 09:38:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:28.145 09:38:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:28.145 ************************************
00:06:28.145 START TEST default_locks_via_rpc
************************************
00:06:28.145 09:38:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:06:28.145 09:38:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3652051
00:06:28.145 09:38:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3652051
00:06:28.145 09:38:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:28.145 09:38:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3652051 ']'
00:06:28.145 09:38:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:28.145 09:38:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:28.145 09:38:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:28.145 09:38:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:28.145 09:38:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:28.145 [2024-11-27 09:38:43.459946] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization...
[2024-11-27 09:38:43.460006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3652051 ]
[2024-11-27 09:38:43.544312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-27 09:38:43.577330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:29.087 09:38:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:29.087 09:38:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:06:29.087 09:38:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:06:29.087 09:38:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:29.087 09:38:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:29.087 09:38:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:29.087 09:38:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:06:29.087 09:38:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:29.087 09:38:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:06:29.087 09:38:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:29.087 09:38:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:06:29.087 09:38:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:29.087 09:38:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:29.087 09:38:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:29.087 09:38:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3652051
00:06:29.087 09:38:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3652051
00:06:29.087 09:38:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:29.348 09:38:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3652051
00:06:29.348 09:38:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 3652051 ']'
00:06:29.348 09:38:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 3652051
00:06:29.348 09:38:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:06:29.348 09:38:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:29.348 09:38:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3652051
00:06:29.348 09:38:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:29.348 09:38:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:29.348 09:38:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3652051'
killing process with pid 3652051
00:06:29.348 09:38:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 3652051
00:06:29.348 09:38:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 3652051
00:06:29.610
00:06:29.610 real 0m1.532s
00:06:29.610 user 0m1.653s
00:06:29.610 sys 0m0.540s
00:06:29.610 09:38:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:29.610 09:38:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:29.610 ************************************
00:06:29.610 END TEST default_locks_via_rpc
00:06:29.610 ************************************
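default_locks_via_rpc, which just completed, drives the same lock lifecycle over RPC rather than a CLI flag; the two calls it issues through rpc_cmd can be reproduced directly. A sketch against the socket used in this run:

  # Drop the per-core lock files at runtime (sketch)
  scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
  # ...no /var/tmp/spdk_cpu_lock_* files are held in between...
  scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
  # ...and they are re-acquired.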
00:06:29.610 09:38:44 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:06:29.610 09:38:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:29.610 09:38:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:29.610 09:38:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:29.610 ************************************
00:06:29.610 START TEST non_locking_app_on_locked_coremask
************************************
00:06:29.610 09:38:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:06:29.610 09:38:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3652378
00:06:29.610 09:38:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3652378 /var/tmp/spdk.sock
00:06:29.610 09:38:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:29.610 09:38:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3652378 ']'
00:06:29.610 09:38:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:29.610 09:38:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:29.610 09:38:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:29.610 09:38:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:29.610 09:38:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
[2024-11-27 09:38:45.068187] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization...
00:06:29.610 [2024-11-27 09:38:45.068243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3652378 ]
[2024-11-27 09:38:45.152585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-27 09:38:45.184865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:30.443 09:38:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:30.443 09:38:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:30.443 09:38:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3652506
00:06:30.443 09:38:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3652506 /var/tmp/spdk2.sock
00:06:30.443 09:38:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:06:30.443 09:38:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3652506 ']'
00:06:30.443 09:38:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:30.443 09:38:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:30.443 09:38:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:30.443 09:38:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:30.443 09:38:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:30.703 [2024-11-27 09:38:45.904373] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization...
[2024-11-27 09:38:45.904426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3652506 ]
[2024-11-27 09:38:45.992030] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:30.703 [2024-11-27 09:38:45.992052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-27 09:38:46.050305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:31.273 09:38:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:31.274 09:38:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:31.274 09:38:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3652378
00:06:31.274 09:38:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3652378
00:06:31.274 09:38:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:32.214 lslocks: write error
00:06:32.214 09:38:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3652378
00:06:32.214 09:38:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3652378 ']'
00:06:32.214 09:38:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3652378
00:06:32.214 09:38:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:32.214 09:38:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:32.214 09:38:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3652378
00:06:32.214 09:38:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:32.214 09:38:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:32.214 09:38:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3652378'
killing process with pid 3652378
00:06:32.214 09:38:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3652378
00:06:32.214 09:38:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3652378
00:06:32.475 09:38:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3652506
00:06:32.475 09:38:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3652506 ']'
00:06:32.475 09:38:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3652506
00:06:32.475 09:38:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:32.475 09:38:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:32.475 09:38:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3652506
00:06:32.475 09:38:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:32.475 09:38:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:32.475 09:38:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3652506'
killing process with pid 3652506
00:06:32.475 09:38:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3652506
00:06:32.475 09:38:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3652506
00:06:32.736
00:06:32.736 real 0m2.986s
00:06:32.736 user 0m3.312s
00:06:32.736 sys 0m0.953s
00:06:32.736 09:38:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:32.736 09:38:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:32.736 ************************************
00:06:32.736 END TEST non_locking_app_on_locked_coremask
00:06:32.736 ************************************
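The test that just finished shows the sanctioned way to co-schedule two targets on one core: the second instance opts out of lock claiming and takes its own RPC socket. As a sketch (binary path, mask, and sockets as in this run):

  # First instance claims core 0 (lock file /var/tmp/spdk_cpu_lock_000)
  build/bin/spdk_tgt -m 0x1 &
  # Second instance may share core 0 only because it skips the lock claim,
  # and it needs a distinct RPC socket to avoid clashing on /var/tmp/spdk.sock
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &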
00:06:32.736 09:38:48 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:06:32.736 09:38:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:32.736 09:38:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:32.736 09:38:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:32.736 ************************************
00:06:32.736 START TEST locking_app_on_unlocked_coremask
************************************
00:06:32.736 09:38:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:06:32.736 09:38:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3652923
00:06:32.736 09:38:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3652923 /var/tmp/spdk.sock
00:06:32.736 09:38:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:06:32.736 09:38:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3652923 ']'
00:06:32.736 09:38:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:32.736 09:38:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:32.736 09:38:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:32.736 09:38:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:32.736 09:38:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
[2024-11-27 09:38:48.136305] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization...
[2024-11-27 09:38:48.136362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3652923 ]
[2024-11-27 09:38:48.221312] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:32.996 [2024-11-27 09:38:48.221338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-27 09:38:48.254093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:33.566 09:38:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:33.566 09:38:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:33.566 09:38:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3653212
00:06:33.566 09:38:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3653212 /var/tmp/spdk2.sock
00:06:33.566 09:38:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3653212 ']'
00:06:33.566 09:38:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:33.566 09:38:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:33.566 09:38:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:33.566 09:38:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:33.566 09:38:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:33.566 09:38:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
[2024-11-27 09:38:48.967368] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization...
00:06:33.566 [2024-11-27 09:38:48.967422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3653212 ]
[2024-11-27 09:38:49.054970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-27 09:38:49.113181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:34.396 09:38:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:34.396 09:38:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:34.396 09:38:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3653212
00:06:34.396 09:38:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3653212
00:06:34.396 09:38:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:34.968 lslocks: write error
00:06:34.968 09:38:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3652923
00:06:34.968 09:38:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3652923 ']'
00:06:34.968 09:38:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3652923
00:06:34.968 09:38:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:34.968 09:38:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:34.968 09:38:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3652923
00:06:34.968 09:38:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:34.968 09:38:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:34.968 09:38:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3652923'
killing process with pid 3652923
00:06:34.968 09:38:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3652923
00:06:34.968 09:38:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3652923
00:06:35.540 09:38:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3653212
00:06:35.540 09:38:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3653212 ']'
00:06:35.540 09:38:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3653212
00:06:35.540 09:38:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:35.540 09:38:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:35.540 09:38:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3653212
00:06:35.540 09:38:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:35.540 09:38:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:35.540 09:38:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3653212'
killing process with pid 3653212
00:06:35.540 09:38:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3653212
00:06:35.540 09:38:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3653212
00:06:35.540
00:06:35.540 real 0m2.890s
00:06:35.540 user 0m3.209s
00:06:35.540 sys 0m0.914s
00:06:35.540 09:38:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:35.540 09:38:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:35.540 ************************************
00:06:35.540 END TEST locking_app_on_unlocked_coremask
00:06:35.540 ************************************
00:06:35.540 09:38:50 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:35.540 09:38:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:35.540 09:38:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:35.540 09:38:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:35.802 ************************************
00:06:35.802 START TEST locking_app_on_locked_coremask
************************************
00:06:35.802 09:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:06:35.802 09:38:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3653590
00:06:35.802 09:38:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3653590 /var/tmp/spdk.sock
00:06:35.802 09:38:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:35.802 09:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3653590 ']'
00:06:35.802 09:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:35.802 09:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:35.802 09:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:35.802 09:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:35.802 09:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
[2024-11-27 09:38:51.098724] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization...
00:06:35.802 [2024-11-27 09:38:51.098772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3653590 ]
[2024-11-27 09:38:51.181140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-27 09:38:51.212448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:36.746 09:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:36.746 09:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:36.746 09:38:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3653864
00:06:36.746 09:38:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3653864 /var/tmp/spdk2.sock
00:06:36.747 09:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:06:36.747 09:38:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:36.747 09:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3653864 /var/tmp/spdk2.sock
00:06:36.747 09:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:36.747 09:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:36.747 09:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:06:36.747 09:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:36.747 09:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3653864 /var/tmp/spdk2.sock
00:06:36.747 09:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3653864 ']'
00:06:36.747 09:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:36.747 09:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:36.747 09:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:36.747 09:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:36.747 09:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
[2024-11-27 09:38:51.945348] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization...
00:06:36.747 [2024-11-27 09:38:51.945400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3653864 ]
[2024-11-27 09:38:52.033413] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3653590 has claimed it.
[2024-11-27 09:38:52.033448] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:37.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3653864) - No such process
00:06:37.366 ERROR: process (pid: 3653864) is no longer running
00:06:37.366 09:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:37.366 09:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:06:37.366 09:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:06:37.366 09:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:37.366 09:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:37.366 09:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:37.366 09:38:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3653590
00:06:37.366 09:38:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3653590
00:06:37.366 09:38:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:37.642 lslocks: write error
00:06:37.642 09:38:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3653590
00:06:37.642 09:38:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3653590 ']'
00:06:37.642 09:38:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3653590
00:06:37.642 09:38:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:37.642 09:38:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:37.642 09:38:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3653590
00:06:37.916 09:38:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:37.916 09:38:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:37.916 09:38:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3653590'
killing process with pid 3653590
00:06:37.916 09:38:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3653590
00:06:37.916 09:38:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3653590
00:06:37.916
00:06:37.916 real 0m2.251s
00:06:37.917 user 0m2.541s
00:06:37.917 sys 0m0.646s
00:06:37.917 09:38:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:37.917 09:38:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:37.917 ************************************
00:06:37.917 END TEST locking_app_on_locked_coremask
00:06:37.917 ************************************
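locking_app_on_locked_coremask, which just completed, depends on the second launch failing fast: claim_cpu_cores refuses core 0 while the first target's lock file is held, and autotest's NOT wrapper inverts the exit code. Without the wrapper the same expectation is just an if on the launch, roughly (a sketch with the binary path and socket from this run):

  # Expect the second instance to exit non-zero while core 0 is claimed (sketch)
  if build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
    echo 'unexpected: second instance acquired core 0' >&2
    exit 1
  fi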
00:06:38.177 [2024-11-27 09:38:53.420971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3654131 ] 00:06:38.177 [2024-11-27 09:38:53.508247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.177 [2024-11-27 09:38:53.542590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.177 [2024-11-27 09:38:53.542739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.177 [2024-11-27 09:38:53.542742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.118 09:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.118 09:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:39.118 09:38:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3654306 00:06:39.118 09:38:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3654306 /var/tmp/spdk2.sock 00:06:39.118 09:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:39.118 09:38:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:39.118 09:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3654306 /var/tmp/spdk2.sock 00:06:39.118 09:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:39.118 09:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:39.118 09:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:39.118 09:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:39.118 09:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3654306 /var/tmp/spdk2.sock 00:06:39.118 09:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3654306 ']' 00:06:39.118 09:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.118 09:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.118 09:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.118 09:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.118 09:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.118 [2024-11-27 09:38:54.281718] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
00:06:39.118 [2024-11-27 09:38:54.281771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3654306 ] 00:06:39.118 [2024-11-27 09:38:54.395830] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3654131 has claimed it. 00:06:39.118 [2024-11-27 09:38:54.395876] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:39.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3654306) - No such process 00:06:39.689 ERROR: process (pid: 3654306) is no longer running 00:06:39.689 09:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.689 09:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:39.689 09:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:39.689 09:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:39.689 09:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:39.689 09:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:39.689 09:38:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:39.689 09:38:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:39.689 09:38:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:39.689 09:38:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:39.689 09:38:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3654131 00:06:39.689 09:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 3654131 ']' 00:06:39.689 09:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 3654131 00:06:39.689 09:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:39.689 09:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.689 09:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3654131 00:06:39.689 09:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.689 09:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.689 09:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3654131' 00:06:39.689 killing process with pid 3654131 00:06:39.689 09:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 3654131 00:06:39.689 09:38:54 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 3654131 00:06:39.689 00:06:39.689 real 0m1.787s 00:06:39.689 user 0m5.171s 00:06:39.689 sys 0m0.398s 00:06:39.689 09:38:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.689 09:38:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.689 ************************************ 00:06:39.689 END TEST locking_overlapped_coremask 00:06:39.689 ************************************ 00:06:39.950 09:38:55 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:39.951 09:38:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.951 09:38:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.951 09:38:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.951 ************************************ 00:06:39.951 START TEST locking_overlapped_coremask_via_rpc 00:06:39.951 ************************************ 00:06:39.951 09:38:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:39.951 09:38:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3654580 00:06:39.951 09:38:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3654580 /var/tmp/spdk.sock 00:06:39.951 09:38:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:39.951 09:38:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3654580 ']' 00:06:39.951 09:38:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.951 09:38:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.951 09:38:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.951 09:38:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.951 09:38:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.951 [2024-11-27 09:38:55.287761] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:06:39.951 [2024-11-27 09:38:55.287817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3654580 ] 00:06:39.951 [2024-11-27 09:38:55.374435] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
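Note: check_remaining_locks, stepped through at event/cpu_locks.sh@36-38 above, is a plain glob-versus-brace-expansion comparison: for a three-core mask, exactly locks 000..002 must exist. Reconstructed from the traced lines:
    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files that actually exist
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # what a 0x7 (cores 0-2) mask should hold
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }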
00:06:39.951 [2024-11-27 09:38:55.374463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:39.951 [2024-11-27 09:38:55.409924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.951 [2024-11-27 09:38:55.410076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.951 [2024-11-27 09:38:55.410079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.892 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.892 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:40.892 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3654683 00:06:40.892 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3654683 /var/tmp/spdk2.sock 00:06:40.892 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3654683 ']' 00:06:40.892 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:40.892 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.892 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.892 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.892 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.892 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.892 [2024-11-27 09:38:56.141049] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:06:40.892 [2024-11-27 09:38:56.141099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3654683 ] 00:06:40.892 [2024-11-27 09:38:56.252238] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:40.892 [2024-11-27 09:38:56.252268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:40.892 [2024-11-27 09:38:56.326711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:40.892 [2024-11-27 09:38:56.330281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.892 [2024-11-27 09:38:56.330281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:41.576 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.576 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:41.576 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:41.576 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.576 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.576 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.576 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:41.576 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:41.576 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:41.576 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:41.576 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.576 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:41.576 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.576 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:41.576 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.576 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.576 [2024-11-27 09:38:56.942239] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3654580 has claimed it. 
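Note: the claim_cpu_cores ERROR above is the expected outcome, the same collision that killed the second target at startup in the previous test. Mask 0x7 covers cores 0-2 and mask 0x1c covers cores 2-4, so once the first target re-enables its locks over RPC, core 2 is already held. The contested core falls straight out of the masks:
    printf '0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4: only bit 2 is common, i.e. CPU core 2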
00:06:41.576 request:
00:06:41.576 {
00:06:41.576 "method": "framework_enable_cpumask_locks",
00:06:41.576 "req_id": 1
00:06:41.576 }
00:06:41.576 Got JSON-RPC error response
00:06:41.576 response:
00:06:41.576 {
00:06:41.576 "code": -32603,
00:06:41.576 "message": "Failed to claim CPU core: 2"
00:06:41.576 }
00:06:41.576 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:41.576 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:41.576 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:41.576 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:41.576 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:41.576 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3654580 /var/tmp/spdk.sock 00:06:41.576 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3654580 ']' 00:06:41.576 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.576 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.576 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.576 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.576 09:38:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.836 09:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.836 09:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:41.836 09:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3654683 /var/tmp/spdk2.sock 00:06:41.837 09:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3654683 ']' 00:06:41.837 09:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.837 09:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.837 09:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
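Note: the -32603 response above is the RPC-level twin of the startup failure in the previous test. Both targets here were launched with --disable-cpumask-locks, so they coexist on core 2 until the locks are re-enabled. A by-hand sketch against this job's two sockets (rpc.py path, method name, and socket paths all taken from the trace; outcomes per the log):
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # first target claims cores 0-2
    $RPC -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails: core 2 already locked (-32603)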
00:06:41.837 09:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.837 09:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.096 09:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.096 09:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:42.096 09:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:42.096 09:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:42.096 09:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:42.096 09:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:42.096 00:06:42.096 real 0m2.081s 00:06:42.096 user 0m0.862s 00:06:42.096 sys 0m0.151s 00:06:42.096 09:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.097 09:38:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.097 ************************************ 00:06:42.097 END TEST locking_overlapped_coremask_via_rpc 00:06:42.097 ************************************ 00:06:42.097 09:38:57 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:42.097 09:38:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3654580 ]] 00:06:42.097 09:38:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3654580 00:06:42.097 09:38:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3654580 ']' 00:06:42.097 09:38:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3654580 00:06:42.097 09:38:57 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:42.097 09:38:57 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.097 09:38:57 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3654580 00:06:42.097 09:38:57 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.097 09:38:57 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.097 09:38:57 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3654580' 00:06:42.097 killing process with pid 3654580 00:06:42.097 09:38:57 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3654580 00:06:42.097 09:38:57 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3654580 00:06:42.357 09:38:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3654683 ]] 00:06:42.357 09:38:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3654683 00:06:42.357 09:38:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3654683 ']' 00:06:42.357 09:38:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3654683 00:06:42.357 09:38:57 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:42.357 09:38:57 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:06:42.357 09:38:57 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3654683 00:06:42.357 09:38:57 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:42.357 09:38:57 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:42.357 09:38:57 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3654683' 00:06:42.357 killing process with pid 3654683 00:06:42.357 09:38:57 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3654683 00:06:42.357 09:38:57 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3654683 00:06:42.618 09:38:57 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:42.618 09:38:57 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:42.618 09:38:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3654580 ]] 00:06:42.618 09:38:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3654580 00:06:42.618 09:38:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3654580 ']' 00:06:42.618 09:38:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3654580 00:06:42.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3654580) - No such process 00:06:42.618 09:38:57 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3654580 is not found' 00:06:42.618 Process with pid 3654580 is not found 00:06:42.618 09:38:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3654683 ]] 00:06:42.618 09:38:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3654683 00:06:42.618 09:38:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3654683 ']' 00:06:42.618 09:38:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3654683 00:06:42.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3654683) - No such process 00:06:42.618 09:38:57 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3654683 is not found' 00:06:42.618 Process with pid 3654683 is not found 00:06:42.618 09:38:57 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:42.618 00:06:42.618 real 0m16.337s 00:06:42.618 user 0m28.410s 00:06:42.618 sys 0m5.104s 00:06:42.618 09:38:57 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.618 09:38:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.618 ************************************ 00:06:42.618 END TEST cpu_locks 00:06:42.618 ************************************ 00:06:42.618 00:06:42.618 real 0m42.265s 00:06:42.618 user 1m22.729s 00:06:42.618 sys 0m8.549s 00:06:42.618 09:38:57 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.618 09:38:57 event -- common/autotest_common.sh@10 -- # set +x 00:06:42.618 ************************************ 00:06:42.618 END TEST event 00:06:42.618 ************************************ 00:06:42.618 09:38:57 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:42.618 09:38:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.618 09:38:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.618 09:38:57 -- common/autotest_common.sh@10 -- # set +x 00:06:42.618 ************************************ 00:06:42.618 START TEST thread 00:06:42.618 ************************************ 00:06:42.618 09:38:57 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:42.618 * Looking for test storage... 00:06:42.880 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:42.880 09:38:58 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:42.880 09:38:58 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:42.880 09:38:58 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:42.880 09:38:58 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:42.880 09:38:58 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.880 09:38:58 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.880 09:38:58 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.880 09:38:58 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.880 09:38:58 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.880 09:38:58 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.880 09:38:58 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.880 09:38:58 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.880 09:38:58 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.880 09:38:58 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.880 09:38:58 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.880 09:38:58 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:42.880 09:38:58 thread -- scripts/common.sh@345 -- # : 1 00:06:42.880 09:38:58 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.880 09:38:58 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:42.880 09:38:58 thread -- scripts/common.sh@365 -- # decimal 1 00:06:42.880 09:38:58 thread -- scripts/common.sh@353 -- # local d=1 00:06:42.880 09:38:58 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.880 09:38:58 thread -- scripts/common.sh@355 -- # echo 1 00:06:42.880 09:38:58 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.880 09:38:58 thread -- scripts/common.sh@366 -- # decimal 2 00:06:42.880 09:38:58 thread -- scripts/common.sh@353 -- # local d=2 00:06:42.880 09:38:58 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.880 09:38:58 thread -- scripts/common.sh@355 -- # echo 2 00:06:42.880 09:38:58 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.880 09:38:58 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.880 09:38:58 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.880 09:38:58 thread -- scripts/common.sh@368 -- # return 0 00:06:42.880 09:38:58 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.880 09:38:58 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:42.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.880 --rc genhtml_branch_coverage=1 00:06:42.880 --rc genhtml_function_coverage=1 00:06:42.880 --rc genhtml_legend=1 00:06:42.880 --rc geninfo_all_blocks=1 00:06:42.880 --rc geninfo_unexecuted_blocks=1 00:06:42.881 00:06:42.881 ' 00:06:42.881 09:38:58 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:42.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.881 --rc genhtml_branch_coverage=1 00:06:42.881 --rc genhtml_function_coverage=1 00:06:42.881 --rc genhtml_legend=1 00:06:42.881 --rc geninfo_all_blocks=1 00:06:42.881 --rc geninfo_unexecuted_blocks=1 00:06:42.881 
00:06:42.881 ' 00:06:42.881 09:38:58 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:42.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.881 --rc genhtml_branch_coverage=1 00:06:42.881 --rc genhtml_function_coverage=1 00:06:42.881 --rc genhtml_legend=1 00:06:42.881 --rc geninfo_all_blocks=1 00:06:42.881 --rc geninfo_unexecuted_blocks=1 00:06:42.881 00:06:42.881 ' 00:06:42.881 09:38:58 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:42.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.881 --rc genhtml_branch_coverage=1 00:06:42.881 --rc genhtml_function_coverage=1 00:06:42.881 --rc genhtml_legend=1 00:06:42.881 --rc geninfo_all_blocks=1 00:06:42.881 --rc geninfo_unexecuted_blocks=1 00:06:42.881 00:06:42.881 ' 00:06:42.881 09:38:58 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:42.881 09:38:58 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:42.881 09:38:58 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.881 09:38:58 thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.881 ************************************ 00:06:42.881 START TEST thread_poller_perf 00:06:42.881 ************************************ 00:06:42.881 09:38:58 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:42.881 [2024-11-27 09:38:58.249471] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:06:42.881 [2024-11-27 09:38:58.249576] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3655192 ] 00:06:42.881 [2024-11-27 09:38:58.335340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.142 [2024-11-27 09:38:58.368717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.142 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:44.083 [2024-11-27T08:38:59.549Z] ======================================
00:06:44.083 [2024-11-27T08:38:59.549Z] busy:2409400676 (cyc)
00:06:44.083 [2024-11-27T08:38:59.549Z] total_run_count: 413000
00:06:44.083 [2024-11-27T08:38:59.549Z] tsc_hz: 2400000000 (cyc)
00:06:44.083 [2024-11-27T08:38:59.549Z] ======================================
00:06:44.083 [2024-11-27T08:38:59.549Z] poller_cost: 5833 (cyc), 2430 (nsec)
00:06:44.083
00:06:44.083 real 0m1.174s
00:06:44.083 user 0m1.095s
00:06:44.083 sys 0m0.075s
00:06:44.083 09:38:59 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.083 09:38:59 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:44.083 ************************************ 00:06:44.083 END TEST thread_poller_perf 00:06:44.083 ************************************ 00:06:44.083 09:38:59 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:44.083 09:38:59 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:44.083 09:38:59 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.083 09:38:59 thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.083 ************************************ 00:06:44.083 START TEST thread_poller_perf 00:06:44.083 ************************************ 00:06:44.083 09:38:59 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:44.344 [2024-11-27 09:38:59.500331] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:06:44.344 [2024-11-27 09:38:59.500430] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3655481 ] 00:06:44.344 [2024-11-27 09:38:59.586754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.344 [2024-11-27 09:38:59.616185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.344 Running 1000 pollers for 1 seconds with 0 microseconds period.
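Note: the poller_cost figures in these banners are straight division: cycles spent busy over total poller runs, converted to nanoseconds via the TSC rate. Recomputing the first run above (1-microsecond period) with bash integer math, as printed:
    busy=2409400676 runs=413000 tsc_hz=2400000000
    echo $(( busy / runs ))                        # 5833 cycles per poller invocation
    echo $(( busy / runs * 1000000000 / tsc_hz ))  # 2430 nsec at the 2.4 GHz TSC
The 0-microsecond run whose banner follows lands at 447 cyc / 186 nsec by the same division.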
00:06:45.287 [2024-11-27T08:39:00.753Z] ======================================
00:06:45.287 [2024-11-27T08:39:00.753Z] busy:2401312742 (cyc)
00:06:45.287 [2024-11-27T08:39:00.753Z] total_run_count: 5363000
00:06:45.287 [2024-11-27T08:39:00.753Z] tsc_hz: 2400000000 (cyc)
00:06:45.287 [2024-11-27T08:39:00.753Z] ======================================
00:06:45.287 [2024-11-27T08:39:00.753Z] poller_cost: 447 (cyc), 186 (nsec)
00:06:45.287
00:06:45.287 real 0m1.166s
00:06:45.287 user 0m1.080s
00:06:45.287 sys 0m0.082s
00:06:45.287 09:39:00 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.287 09:39:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:45.287 ************************************ 00:06:45.287 END TEST thread_poller_perf 00:06:45.287 ************************************ 00:06:45.287 09:39:00 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:45.287 00:06:45.287 real 0m2.695s 00:06:45.287 user 0m2.357s 00:06:45.287 sys 0m0.352s 00:06:45.287 09:39:00 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.287 09:39:00 thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.287 ************************************ 00:06:45.287 END TEST thread 00:06:45.287 ************************************ 00:06:45.287 09:39:00 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:45.287 09:39:00 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:45.287 09:39:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.287 09:39:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.287 09:39:00 -- common/autotest_common.sh@10 -- # set +x 00:06:45.548 ************************************ 00:06:45.548 START TEST app_cmdline 00:06:45.548 ************************************ 00:06:45.548 09:39:00 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:45.548 * Looking for test storage...
00:06:45.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:45.548 09:39:00 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:45.548 09:39:00 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:45.548 09:39:00 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:45.548 09:39:00 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:45.548 09:39:00 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.548 09:39:00 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.548 09:39:00 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.548 09:39:00 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.548 09:39:00 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.548 09:39:00 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.548 09:39:00 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.548 09:39:00 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.548 09:39:00 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.548 09:39:00 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.548 09:39:00 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.548 09:39:00 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:45.548 09:39:00 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:45.548 09:39:00 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.548 09:39:00 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:45.548 09:39:00 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:45.548 09:39:00 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:45.548 09:39:00 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.548 09:39:00 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:45.548 09:39:00 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.548 09:39:00 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:45.548 09:39:00 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:45.548 09:39:00 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.548 09:39:00 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:45.548 09:39:00 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.548 09:39:00 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.548 09:39:00 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.548 09:39:00 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:45.548 09:39:00 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.548 09:39:00 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:45.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.548 --rc genhtml_branch_coverage=1 00:06:45.548 --rc genhtml_function_coverage=1 00:06:45.548 --rc genhtml_legend=1 00:06:45.548 --rc geninfo_all_blocks=1 00:06:45.548 --rc geninfo_unexecuted_blocks=1 00:06:45.548 00:06:45.548 ' 00:06:45.548 09:39:00 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:45.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.548 --rc genhtml_branch_coverage=1 00:06:45.548 --rc genhtml_function_coverage=1 00:06:45.548 --rc genhtml_legend=1 00:06:45.548 --rc geninfo_all_blocks=1 00:06:45.548 --rc geninfo_unexecuted_blocks=1 
00:06:45.548 00:06:45.548 ' 00:06:45.549 09:39:00 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:45.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.549 --rc genhtml_branch_coverage=1 00:06:45.549 --rc genhtml_function_coverage=1 00:06:45.549 --rc genhtml_legend=1 00:06:45.549 --rc geninfo_all_blocks=1 00:06:45.549 --rc geninfo_unexecuted_blocks=1 00:06:45.549 00:06:45.549 ' 00:06:45.549 09:39:00 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:45.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.549 --rc genhtml_branch_coverage=1 00:06:45.549 --rc genhtml_function_coverage=1 00:06:45.549 --rc genhtml_legend=1 00:06:45.549 --rc geninfo_all_blocks=1 00:06:45.549 --rc geninfo_unexecuted_blocks=1 00:06:45.549 00:06:45.549 ' 00:06:45.549 09:39:00 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:45.549 09:39:00 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3655918 00:06:45.549 09:39:00 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3655918 00:06:45.549 09:39:00 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 3655918 ']' 00:06:45.549 09:39:00 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:45.549 09:39:00 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.549 09:39:00 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.549 09:39:00 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.549 09:39:00 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.549 09:39:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:45.811 [2024-11-27 09:39:01.026133] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
00:06:45.811 [2024-11-27 09:39:01.026221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3655918 ] 00:06:45.811 [2024-11-27 09:39:01.113632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.811 [2024-11-27 09:39:01.148296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.384 09:39:01 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.384 09:39:01 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:46.384 09:39:01 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:06:46.646 {
00:06:46.646 "version": "SPDK v25.01-pre git sha1 c25d82eb4",
00:06:46.646 "fields": {
00:06:46.646 "major": 25,
00:06:46.646 "minor": 1,
00:06:46.646 "patch": 0,
00:06:46.646 "suffix": "-pre",
00:06:46.646 "commit": "c25d82eb4"
00:06:46.646 }
00:06:46.646 }
00:06:46.646 09:39:01 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:46.646 09:39:01 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:46.646 09:39:01 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:46.646 09:39:01 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:46.646 09:39:01 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:46.646 09:39:01 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:46.646 09:39:01 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.646 09:39:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:46.646 09:39:01 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:46.646 09:39:01 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.646 09:39:02 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:46.646 09:39:02 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:46.646 09:39:02 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:46.646 09:39:02 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:46.646 09:39:02 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:46.646 09:39:02 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:46.646 09:39:02 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.646 09:39:02 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:46.646 09:39:02 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.646 09:39:02 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:46.646 09:39:02 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.646 09:39:02 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:46.646 09:39:02 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:46.646 09:39:02 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:46.907 request:
00:06:46.907 {
00:06:46.907 "method": "env_dpdk_get_mem_stats",
00:06:46.907 "req_id": 1
00:06:46.907 }
00:06:46.907 Got JSON-RPC error response
00:06:46.907 response:
00:06:46.907 {
00:06:46.907 "code": -32601,
00:06:46.907 "message": "Method not found"
00:06:46.907 }
00:06:46.907 09:39:02 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:46.907 09:39:02 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:46.907 09:39:02 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:46.907 09:39:02 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:46.907 09:39:02 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3655918 00:06:46.907 09:39:02 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 3655918 ']' 00:06:46.907 09:39:02 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 3655918 00:06:46.907 09:39:02 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:46.907 09:39:02 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.907 09:39:02 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3655918 00:06:46.907 09:39:02 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.907 09:39:02 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.907 09:39:02 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3655918' 00:06:46.907 killing process with pid 3655918 00:06:46.907 09:39:02 app_cmdline -- common/autotest_common.sh@973 -- # kill 3655918 00:06:46.907 09:39:02 app_cmdline -- common/autotest_common.sh@978 -- # wait 3655918 00:06:47.167 00:06:47.167 real 0m1.679s 00:06:47.167 user 0m1.980s 00:06:47.167 sys 0m0.474s 00:06:47.167 09:39:02 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.167 09:39:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:47.167 ************************************ 00:06:47.167 END TEST app_cmdline 00:06:47.167 ************************************ 00:06:47.167 09:39:02 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:47.167 09:39:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.167 09:39:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.167 09:39:02 -- common/autotest_common.sh@10 -- # set +x 00:06:47.167 ************************************ 00:06:47.167 START TEST version 00:06:47.167 ************************************ 00:06:47.167 09:39:02 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:47.167 * Looking for test storage...
00:06:47.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:47.167 09:39:02 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:47.167 09:39:02 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:47.167 09:39:02 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:47.428 09:39:02 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:47.428 09:39:02 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.428 09:39:02 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.428 09:39:02 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.428 09:39:02 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.428 09:39:02 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.428 09:39:02 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.428 09:39:02 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.428 09:39:02 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.428 09:39:02 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.428 09:39:02 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.428 09:39:02 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.428 09:39:02 version -- scripts/common.sh@344 -- # case "$op" in 00:06:47.428 09:39:02 version -- scripts/common.sh@345 -- # : 1 00:06:47.428 09:39:02 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.428 09:39:02 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:47.428 09:39:02 version -- scripts/common.sh@365 -- # decimal 1 00:06:47.428 09:39:02 version -- scripts/common.sh@353 -- # local d=1 00:06:47.428 09:39:02 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.428 09:39:02 version -- scripts/common.sh@355 -- # echo 1 00:06:47.428 09:39:02 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.428 09:39:02 version -- scripts/common.sh@366 -- # decimal 2 00:06:47.428 09:39:02 version -- scripts/common.sh@353 -- # local d=2 00:06:47.428 09:39:02 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.428 09:39:02 version -- scripts/common.sh@355 -- # echo 2 00:06:47.428 09:39:02 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.428 09:39:02 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.428 09:39:02 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.428 09:39:02 version -- scripts/common.sh@368 -- # return 0 00:06:47.428 09:39:02 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.428 09:39:02 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:47.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.428 --rc genhtml_branch_coverage=1 00:06:47.428 --rc genhtml_function_coverage=1 00:06:47.428 --rc genhtml_legend=1 00:06:47.428 --rc geninfo_all_blocks=1 00:06:47.428 --rc geninfo_unexecuted_blocks=1 00:06:47.428 00:06:47.428 ' 00:06:47.428 09:39:02 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:47.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.428 --rc genhtml_branch_coverage=1 00:06:47.428 --rc genhtml_function_coverage=1 00:06:47.428 --rc genhtml_legend=1 00:06:47.428 --rc geninfo_all_blocks=1 00:06:47.428 --rc geninfo_unexecuted_blocks=1 00:06:47.428 00:06:47.428 ' 00:06:47.428 09:39:02 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:47.428 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.428 --rc genhtml_branch_coverage=1 00:06:47.428 --rc genhtml_function_coverage=1 00:06:47.428 --rc genhtml_legend=1 00:06:47.428 --rc geninfo_all_blocks=1 00:06:47.428 --rc geninfo_unexecuted_blocks=1 00:06:47.428 00:06:47.428 ' 00:06:47.428 09:39:02 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:47.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.428 --rc genhtml_branch_coverage=1 00:06:47.428 --rc genhtml_function_coverage=1 00:06:47.428 --rc genhtml_legend=1 00:06:47.428 --rc geninfo_all_blocks=1 00:06:47.428 --rc geninfo_unexecuted_blocks=1 00:06:47.428 00:06:47.428 ' 00:06:47.428 09:39:02 version -- app/version.sh@17 -- # get_header_version major 00:06:47.428 09:39:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:47.428 09:39:02 version -- app/version.sh@14 -- # cut -f2 00:06:47.428 09:39:02 version -- app/version.sh@14 -- # tr -d '"' 00:06:47.428 09:39:02 version -- app/version.sh@17 -- # major=25 00:06:47.428 09:39:02 version -- app/version.sh@18 -- # get_header_version minor 00:06:47.428 09:39:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:47.428 09:39:02 version -- app/version.sh@14 -- # cut -f2 00:06:47.428 09:39:02 version -- app/version.sh@14 -- # tr -d '"' 00:06:47.428 09:39:02 version -- app/version.sh@18 -- # minor=1 00:06:47.428 09:39:02 version -- app/version.sh@19 -- # get_header_version patch 00:06:47.428 09:39:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:47.428 09:39:02 version -- app/version.sh@14 -- # cut -f2 00:06:47.429 09:39:02 version -- app/version.sh@14 -- # tr -d '"' 00:06:47.429 09:39:02 version -- app/version.sh@19 -- # patch=0 00:06:47.429 09:39:02 version -- app/version.sh@20 -- # get_header_version suffix 00:06:47.429 09:39:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:47.429 09:39:02 version -- app/version.sh@14 -- # cut -f2 00:06:47.429 09:39:02 version -- app/version.sh@14 -- # tr -d '"' 00:06:47.429 09:39:02 version -- app/version.sh@20 -- # suffix=-pre 00:06:47.429 09:39:02 version -- app/version.sh@22 -- # version=25.1 00:06:47.429 09:39:02 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:47.429 09:39:02 version -- app/version.sh@28 -- # version=25.1rc0 00:06:47.429 09:39:02 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:47.429 09:39:02 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:47.429 09:39:02 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:47.429 09:39:02 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:47.429 00:06:47.429 real 0m0.278s 00:06:47.429 user 0m0.175s 00:06:47.429 sys 0m0.148s 00:06:47.429 09:39:02 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.429 
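Note: the get_header_version helper traced above is a grep/cut/tr pipeline over include/spdk/version.h, and the assembled string must match what Python's spdk module reports. The major-version step, reduced to one line with the same commands the trace shows:
    H=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$H" | cut -f2 | tr -d '"'   # -> 25
    # With major=25, minor=1, patch=0, suffix=-pre the script builds version=25.1,
    # and because patch == 0 the expected py_version becomes 25.1rc0, as checked above.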
09:39:02 version -- common/autotest_common.sh@10 -- # set +x 00:06:47.429 ************************************ 00:06:47.429 END TEST version 00:06:47.429 ************************************ 00:06:47.429 09:39:02 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:47.429 09:39:02 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:47.429 09:39:02 -- spdk/autotest.sh@194 -- # uname -s 00:06:47.429 09:39:02 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:47.429 09:39:02 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:47.429 09:39:02 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:47.429 09:39:02 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:47.429 09:39:02 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:47.429 09:39:02 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:47.429 09:39:02 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:47.429 09:39:02 -- common/autotest_common.sh@10 -- # set +x 00:06:47.429 09:39:02 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:47.429 09:39:02 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:47.429 09:39:02 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:47.429 09:39:02 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:47.429 09:39:02 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:47.429 09:39:02 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:47.429 09:39:02 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:47.429 09:39:02 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:47.429 09:39:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.429 09:39:02 -- common/autotest_common.sh@10 -- # set +x 00:06:47.690 ************************************ 00:06:47.690 START TEST nvmf_tcp 00:06:47.690 ************************************ 00:06:47.690 09:39:02 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:47.690 * Looking for test storage... 
00:06:47.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:47.690 09:39:03 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:47.690 09:39:03 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:47.690 09:39:03 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:47.690 09:39:03 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:47.690 09:39:03 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.690 09:39:03 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.690 09:39:03 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.690 09:39:03 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.690 09:39:03 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.690 09:39:03 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.690 09:39:03 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.690 09:39:03 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.690 09:39:03 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.690 09:39:03 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.690 09:39:03 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.690 09:39:03 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:47.690 09:39:03 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:47.690 09:39:03 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.690 09:39:03 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:47.690 09:39:03 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:47.690 09:39:03 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:47.690 09:39:03 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.690 09:39:03 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:47.690 09:39:03 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.690 09:39:03 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:47.690 09:39:03 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:47.690 09:39:03 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.690 09:39:03 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:47.690 09:39:03 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.690 09:39:03 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.690 09:39:03 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.690 09:39:03 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:47.690 09:39:03 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.690 09:39:03 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:47.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.690 --rc genhtml_branch_coverage=1 00:06:47.690 --rc genhtml_function_coverage=1 00:06:47.690 --rc genhtml_legend=1 00:06:47.690 --rc geninfo_all_blocks=1 00:06:47.690 --rc geninfo_unexecuted_blocks=1 00:06:47.690 00:06:47.690 ' 00:06:47.690 09:39:03 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:47.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.690 --rc genhtml_branch_coverage=1 00:06:47.690 --rc genhtml_function_coverage=1 00:06:47.690 --rc genhtml_legend=1 00:06:47.690 --rc geninfo_all_blocks=1 00:06:47.690 --rc geninfo_unexecuted_blocks=1 00:06:47.690 00:06:47.690 ' 00:06:47.690 09:39:03 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:06:47.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.690 --rc genhtml_branch_coverage=1 00:06:47.690 --rc genhtml_function_coverage=1 00:06:47.690 --rc genhtml_legend=1 00:06:47.690 --rc geninfo_all_blocks=1 00:06:47.690 --rc geninfo_unexecuted_blocks=1 00:06:47.690 00:06:47.690 ' 00:06:47.690 09:39:03 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:47.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.690 --rc genhtml_branch_coverage=1 00:06:47.690 --rc genhtml_function_coverage=1 00:06:47.690 --rc genhtml_legend=1 00:06:47.690 --rc geninfo_all_blocks=1 00:06:47.690 --rc geninfo_unexecuted_blocks=1 00:06:47.690 00:06:47.690 ' 00:06:47.690 09:39:03 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:47.690 09:39:03 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:47.690 09:39:03 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:47.691 09:39:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:47.691 09:39:03 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.691 09:39:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:47.952 ************************************ 00:06:47.952 START TEST nvmf_target_core 00:06:47.952 ************************************ 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:47.952 * Looking for test storage... 00:06:47.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:47.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.952 --rc genhtml_branch_coverage=1 00:06:47.952 --rc genhtml_function_coverage=1 00:06:47.952 --rc genhtml_legend=1 00:06:47.952 --rc geninfo_all_blocks=1 00:06:47.952 --rc geninfo_unexecuted_blocks=1 00:06:47.952 00:06:47.952 ' 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:47.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.952 --rc genhtml_branch_coverage=1 00:06:47.952 --rc genhtml_function_coverage=1 00:06:47.952 --rc genhtml_legend=1 00:06:47.952 --rc geninfo_all_blocks=1 00:06:47.952 --rc geninfo_unexecuted_blocks=1 00:06:47.952 00:06:47.952 ' 00:06:47.952 09:39:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:47.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.952 --rc genhtml_branch_coverage=1 00:06:47.952 --rc genhtml_function_coverage=1 00:06:47.953 --rc genhtml_legend=1 00:06:47.953 --rc geninfo_all_blocks=1 00:06:47.953 --rc geninfo_unexecuted_blocks=1 00:06:47.953 00:06:47.953 ' 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:47.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.953 --rc genhtml_branch_coverage=1 00:06:47.953 --rc genhtml_function_coverage=1 00:06:47.953 --rc genhtml_legend=1 00:06:47.953 --rc geninfo_all_blocks=1 00:06:47.953 --rc geninfo_unexecuted_blocks=1 00:06:47.953 00:06:47.953 ' 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:47.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.953 09:39:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:48.215 
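The trace above repeatedly exercises the version guard from scripts/common.sh: lt 1.15 2 splits the installed lcov version and the threshold on '.', '-' and ':', then compares the fields numerically, left to right. A minimal sketch of that comparison, assuming it mirrors the helpers named in the trace (an illustration of the logic, not the verbatim source; numeric fields only):

    # Sketch of the lt/cmp_versions helpers exercised in the trace above.
    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local IFS=.-:            # split fields the way the trace shows (IFS=.-:)
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        local op=$2
        read -ra ver2 <<< "$3"
        local v max
        (( max = ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            # a missing field counts as 0, so 1.15 vs 2 compares 1<2, then 15<0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == *'>'* ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == *'<'* ]]; return; }
        done
        [[ $op == *'='* ]]       # all fields equal: true only for ==, <=, >=
    }

    lt 1.15 2 && echo "lcov predates 2.x"

Because 1 < 2 in the first field the call returns 0, which is why the trace then settles on the pre-2.0 option spelling --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 for LCOV_OPTS rather than the renamed 2.x flags.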
************************************ 00:06:48.215 START TEST nvmf_abort 00:06:48.215 ************************************ 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:48.215 * Looking for test storage... 00:06:48.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:48.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.215 --rc genhtml_branch_coverage=1 00:06:48.215 --rc genhtml_function_coverage=1 00:06:48.215 --rc genhtml_legend=1 00:06:48.215 --rc geninfo_all_blocks=1 00:06:48.215 --rc geninfo_unexecuted_blocks=1 00:06:48.215 00:06:48.215 ' 00:06:48.215 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:48.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.215 --rc genhtml_branch_coverage=1 00:06:48.215 --rc genhtml_function_coverage=1 00:06:48.215 --rc genhtml_legend=1 00:06:48.215 --rc geninfo_all_blocks=1 00:06:48.215 --rc geninfo_unexecuted_blocks=1 00:06:48.215 00:06:48.215 ' 00:06:48.216 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:48.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.216 --rc genhtml_branch_coverage=1 00:06:48.216 --rc genhtml_function_coverage=1 00:06:48.216 --rc genhtml_legend=1 00:06:48.216 --rc geninfo_all_blocks=1 00:06:48.216 --rc geninfo_unexecuted_blocks=1 00:06:48.216 00:06:48.216 ' 00:06:48.216 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:48.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.216 --rc genhtml_branch_coverage=1 00:06:48.216 --rc genhtml_function_coverage=1 00:06:48.216 --rc genhtml_legend=1 00:06:48.216 --rc geninfo_all_blocks=1 00:06:48.216 --rc geninfo_unexecuted_blocks=1 00:06:48.216 00:06:48.216 ' 00:06:48.216 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:48.216 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:48.216 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:48.216 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:48.216 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:48.216 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:48.216 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:48.216 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:48.216 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:48.216 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:48.216 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:48.216 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:48.216 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:48.216 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:48.216 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:48.216 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:48.216 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:48.216 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:48.216 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:48.216 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:48.216 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:48.216 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:48.478 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:48.478 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.478 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.478 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.478 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:48.478 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.478 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:48.478 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:48.478 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:48.478 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:48.478 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:48.478 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:48.478 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:48.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:48.478 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:48.478 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:48.478 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:48.478 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:48.478 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:48.478 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
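Each test script begins by sourcing test/nvmf/common.sh, which (as traced above) pins the listener ports and derives a host identity from nvme gen-hostnqn. A condensed sketch of that setup; the parameter expansion used to pull the UUID out of the NQN is an assumption made for illustration, since the trace only shows the resulting values:

    # Sketch of the host-identity setup traced above (test/nvmf/common.sh).
    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # assumption: strip everything up to 'uuid:'
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

Both flags travel in the NVME_HOST array so that later nvme connect invocations present a stable NQN/host-ID pair to the target.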
00:06:48.478 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:48.478 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:48.478 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:48.478 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:48.478 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:48.478 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:48.478 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:48.478 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:48.478 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:48.478 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:48.478 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:48.478 09:39:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:56.650 09:39:10 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:56.650 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:56.650 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:56.650 09:39:10 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:56.650 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:56.650 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:56.650 09:39:10 
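The gather_supported_nvmf_pci_devs block above buckets PCI functions by vendor:device ID (0x8086:0x159b is the E810 NIC this job selects via SPDK_TEST_NVMF_NICS=e810) and then resolves each function to its kernel net interface through sysfs. The real script consumes a prebuilt pci_bus_cache; the sketch below scans sysfs directly instead, which is an assumption for illustration:

    # Sketch: find E810 (8086:159b) functions and their net interfaces.
    shopt -s nullglob                     # an absent net/ dir then yields nothing
    intel=0x8086
    e810=() net_devs=()
    for dev in /sys/bus/pci/devices/*; do
        [[ $(<"$dev/vendor") == "$intel" && $(<"$dev/device") == 0x159b ]] &&
            e810+=("${dev##*/}")
    done
    for pci in "${e810[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        net_devs+=("${pci_net_devs[@]##*/}")
    done
    echo "Found net devices: ${net_devs[*]}"   # cvl_0_0 cvl_0_1 on this machine
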
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:56.650 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:56.650 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:56.651 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:56.651 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:56.651 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:56.651 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:56.651 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:56.651 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:56.651 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:56.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:56.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:06:56.651 00:06:56.651 --- 10.0.0.2 ping statistics --- 00:06:56.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:56.651 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:06:56.651 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:56.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:56.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:06:56.651 00:06:56.651 --- 10.0.0.1 ping statistics --- 00:06:56.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:56.651 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:06:56.651 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:56.651 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:56.651 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:56.651 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:56.651 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:56.651 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:56.651 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:56.651 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:56.651 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:56.651 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:56.651 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:56.651 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:56.651 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:56.651 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3660911 00:06:56.651 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3660911 00:06:56.651 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:56.651 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3660911 ']' 00:06:56.651 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.651 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.651 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.651 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.651 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:56.651 [2024-11-27 09:39:11.327343] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
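The nvmf_tcp_init sequence traced above turns the two E810 ports into a point-to-point 10.0.0.0/24 link, with the target side isolated in its own network namespace so initiator and target traffic cannot short-circuit through loopback. Condensed from the commands in the trace (interface, namespace, and rule names exactly as logged):

    # Namespace plumbing performed by nvmf_tcp_init above.
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port; the comment tag lets teardown strip just this rule
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The roughly 0.6 ms and 0.3 ms round trips in the ping statistics confirm real NIC-to-NIC traffic rather than a loopback path.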
00:06:56.651 [2024-11-27 09:39:11.327414] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:56.651 [2024-11-27 09:39:11.427348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:56.651 [2024-11-27 09:39:11.480638] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:56.651 [2024-11-27 09:39:11.480691] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:56.651 [2024-11-27 09:39:11.480700] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:56.651 [2024-11-27 09:39:11.480707] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:56.651 [2024-11-27 09:39:11.480714] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:56.651 [2024-11-27 09:39:11.482729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.651 [2024-11-27 09:39:11.482891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.651 [2024-11-27 09:39:11.482892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:56.913 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.913 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:56.913 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:56.913 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:56.913 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:56.913 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:56.913 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:56.913 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.913 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:56.913 [2024-11-27 09:39:12.189835] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:56.913 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.913 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:56.913 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.913 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:56.913 Malloc0 00:06:56.913 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.913 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:56.913 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.913 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:56.913 Delay0 
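The rpc_cmd wrappers traced around this point map one-to-one onto SPDK's scripts/rpc.py: abort.sh stacks a delay bdev on top of a malloc bdev so the abort example always has slow, in-flight I/O to cancel. An equivalent stand-alone sequence, assuming the target is listening on the default /var/tmp/spdk.sock RPC socket:

    # Equivalent rpc.py calls for the abort.sh setup traced here.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0      # 64 MiB, 4096 B blocks
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000             # avg/p99 latencies, us
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

With a full second of artificial latency on every read and write, the abort run at queue depth 128 queues far more I/O than can complete, which is why the statistics below show tens of thousands of aborts succeeding against only 127 completed I/Os.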
00:06:56.913 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.913 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:56.913 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.913 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:56.913 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.913 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:56.913 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.913 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:56.913 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.913 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:56.913 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.913 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:56.913 [2024-11-27 09:39:12.280035] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:56.913 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.913 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:56.913 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.913 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:56.913 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.913 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:57.174 [2024-11-27 09:39:12.431720] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:59.092 Initializing NVMe Controllers 00:06:59.092 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:59.092 controller IO queue size 128 less than required 00:06:59.092 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:59.092 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:59.092 Initialization complete. Launching workers. 
00:06:59.092 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28309 00:06:59.092 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28374, failed to submit 62 00:06:59.092 success 28313, unsuccessful 61, failed 0 00:06:59.092 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:59.092 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.092 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:59.092 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.092 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:59.092 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:59.092 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:59.092 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:59.092 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:59.092 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:59.092 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:59.092 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:59.092 rmmod nvme_tcp 00:06:59.092 rmmod nvme_fabrics 00:06:59.092 rmmod nvme_keyring 00:06:59.092 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:59.354 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:59.354 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:59.354 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3660911 ']' 00:06:59.354 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3660911 00:06:59.354 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3660911 ']' 00:06:59.354 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3660911 00:06:59.354 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:59.354 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.354 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3660911 00:06:59.354 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:59.354 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:59.354 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3660911' 00:06:59.354 killing process with pid 3660911 00:06:59.354 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3660911 00:06:59.354 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3660911 00:06:59.354 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:59.354 09:39:14 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:59.354 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:59.354 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:59.354 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:59.354 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:59.354 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:59.354 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:59.354 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:59.354 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:59.354 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:59.354 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.900 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:01.900 00:07:01.900 real 0m13.426s 00:07:01.900 user 0m13.824s 00:07:01.900 sys 0m6.707s 00:07:01.900 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.900 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:01.900 ************************************ 00:07:01.900 END TEST nvmf_abort 00:07:01.900 ************************************ 00:07:01.900 09:39:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:01.900 09:39:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:01.900 09:39:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.900 09:39:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:01.900 ************************************ 00:07:01.900 START TEST nvmf_ns_hotplug_stress 00:07:01.900 ************************************ 00:07:01.900 09:39:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:01.900 * Looking for test storage... 
00:07:01.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:01.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.900 --rc genhtml_branch_coverage=1 00:07:01.900 --rc genhtml_function_coverage=1 00:07:01.900 --rc genhtml_legend=1 00:07:01.900 --rc geninfo_all_blocks=1 00:07:01.900 --rc geninfo_unexecuted_blocks=1 00:07:01.900 00:07:01.900 ' 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:01.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.900 --rc genhtml_branch_coverage=1 00:07:01.900 --rc genhtml_function_coverage=1 00:07:01.900 --rc genhtml_legend=1 00:07:01.900 --rc geninfo_all_blocks=1 00:07:01.900 --rc geninfo_unexecuted_blocks=1 00:07:01.900 00:07:01.900 ' 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:01.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.900 --rc genhtml_branch_coverage=1 00:07:01.900 --rc genhtml_function_coverage=1 00:07:01.900 --rc genhtml_legend=1 00:07:01.900 --rc geninfo_all_blocks=1 00:07:01.900 --rc geninfo_unexecuted_blocks=1 00:07:01.900 00:07:01.900 ' 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:01.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.900 --rc genhtml_branch_coverage=1 00:07:01.900 --rc genhtml_function_coverage=1 00:07:01.900 --rc genhtml_legend=1 00:07:01.900 --rc geninfo_all_blocks=1 00:07:01.900 --rc geninfo_unexecuted_blocks=1 00:07:01.900 00:07:01.900 ' 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:01.900 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:01.900 09:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:10.051 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.051 
09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:10.051 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:10.051 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.051 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:10.052 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:10.052 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:10.052 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:07:10.052 00:07:10.052 --- 10.0.0.2 ping statistics --- 00:07:10.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.052 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:10.052 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:10.052 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:07:10.052 00:07:10.052 --- 10.0.0.1 ping statistics --- 00:07:10.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.052 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3665912 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3665912 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
3665912 ']' 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.052 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:10.052 [2024-11-27 09:39:24.796631] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:07:10.052 [2024-11-27 09:39:24.796705] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.052 [2024-11-27 09:39:24.896720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:10.052 [2024-11-27 09:39:24.948114] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:10.052 [2024-11-27 09:39:24.948172] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:10.052 [2024-11-27 09:39:24.948183] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:10.052 [2024-11-27 09:39:24.948190] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:10.052 [2024-11-27 09:39:24.948197] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
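The trace to this point has just finished nvmf_tcp_init and nvmfappstart: the first e810 port (cvl_0_0) was moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, the second (cvl_0_1) stayed in the root namespace as 10.0.0.1, TCP port 4420 was opened in iptables, both directions were ping-verified, and nvmf_tgt (pid 3665912) was launched inside the namespace with waitforlisten polling /var/tmp/spdk.sock. A minimal sketch of that flow, reusing the interface and path names from the trace; the polling loop is an illustration of the waitforlisten idea, not SPDK's exact helper:

    # Split the two ports into a point-to-point topology across a net namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Start the target inside the namespace, then poll until it is alive
    # and its RPC socket answers a harmless query.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.5
    done

One wrinkle worth flagging from the sourcing phase above: the "[: : integer expression expected" error at nvmf/common.sh line 33 is the classic empty-operand pitfall, '[' '' -eq 1 ']' evaluated with an unset flag. Testing "${flag:-0}" -eq 1 instead ($flag here standing in for whichever variable was empty) keeps the check well-formed.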
00:07:10.052 [2024-11-27 09:39:24.949980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.052 [2024-11-27 09:39:24.950141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.052 [2024-11-27 09:39:24.950141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:10.313 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.313 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:10.313 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:10.313 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:10.313 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:10.314 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:10.314 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:10.314 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:10.575 [2024-11-27 09:39:25.833824] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:10.575 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:10.836 09:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:10.836 [2024-11-27 09:39:26.241030] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:10.836 09:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:11.096 09:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:11.357 Malloc0 00:07:11.357 09:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:11.618 Delay0 00:07:11.618 09:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.879 09:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:11.879 NULL1 00:07:11.879 09:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:12.140 09:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:12.140 09:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3666362 00:07:12.140 09:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3666362 00:07:12.140 09:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.400 09:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.400 09:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:12.400 09:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:12.661 true 00:07:12.661 09:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3666362 00:07:12.661 09:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.922 09:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.922 09:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:12.922 09:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:13.183 true 00:07:13.183 09:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3666362 00:07:13.183 09:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.444 09:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.705 09:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:13.705 09:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:13.705 true 00:07:13.705 09:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3666362 00:07:13.705 09:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.964 09:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.224 09:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:14.224 09:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:14.224 true 00:07:14.224 09:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3666362 00:07:14.224 09:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.484 09:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.745 09:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:14.745 09:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:14.745 true 00:07:14.745 09:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3666362 00:07:14.745 09:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.006 09:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.267 09:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:15.267 09:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:15.267 true 00:07:15.528 09:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3666362 00:07:15.528 09:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.528 09:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.788 09:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:15.788 09:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:15.788 true 00:07:16.050 09:39:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3666362 00:07:16.050 09:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.050 09:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.311 09:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:16.311 09:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:16.571 true 00:07:16.571 09:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3666362 00:07:16.571 09:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.571 09:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.832 09:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:16.832 09:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:17.092 true 00:07:17.092 09:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3666362 00:07:17.092 09:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.354 09:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.354 09:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:17.354 09:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:17.616 true 00:07:17.616 09:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3666362 00:07:17.616 09:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.877 09:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.877 09:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:17.877 09:39:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:18.138 true 00:07:18.138 09:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3666362 00:07:18.138 09:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.399 09:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.399 09:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:18.399 09:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:18.658 true 00:07:18.658 09:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3666362 00:07:18.658 09:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.919 09:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.180 09:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:19.180 09:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:19.180 true 00:07:19.180 09:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3666362 00:07:19.180 09:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.441 09:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.701 09:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:19.701 09:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:19.701 true 00:07:19.701 09:39:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3666362 00:07:19.701 09:39:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.961 09:39:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.220 09:39:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:20.220 09:39:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:20.220 true 00:07:20.481 09:39:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3666362 00:07:20.481 09:39:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.481 09:39:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.741 09:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:20.741 09:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:21.002 true 00:07:21.002 09:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3666362 00:07:21.002 09:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.002 09:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.262 09:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:21.262 09:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:21.523 true 00:07:21.523 09:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3666362 00:07:21.523 09:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.523 09:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.785 09:39:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:21.785 09:39:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:22.045 true 00:07:22.045 09:39:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3666362 00:07:22.045 09:39:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.306 09:39:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.306 09:39:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:22.306 09:39:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:22.567 true 00:07:22.567 09:39:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3666362 00:07:22.567 09:39:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.827 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.827 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:22.828 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:23.088 true 00:07:23.088 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3666362 00:07:23.088 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.348 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.608 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:23.608 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:23.608 true 00:07:23.608 09:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3666362 00:07:23.608 09:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.884 09:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.144 09:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:24.144 09:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:24.144 true 00:07:24.144 09:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3666362 00:07:24.144 09:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:24.405 09:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:24.666 09:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:07:24.666 09:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:07:24.666 true
00:07:24.926 09:39:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3666362
[... the @44-@50 hotplug cycle repeats unchanged except for the counter, null_size incrementing 1024 through 1054, 09:39:40 through 09:39:57 ...]
00:07:42.117 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3666362
00:07:42.117 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:42.117 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:42.377 Initializing NVMe Controllers
00:07:42.377 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:42.377 Controller IO queue size 128, less than required.
00:07:42.377 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:42.377 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:42.377 Initialization complete. Launching workers.
00:07:42.377 ========================================================
00:07:42.377                                                                                                        Latency(us)
00:07:42.377 Device Information                                                                   :       IOPS      MiB/s    Average        min        max
00:07:42.377 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:             31036.10      15.15    4124.18    1130.98   11046.82
00:07:42.377 ========================================================
00:07:42.377 Total                                                                                :   31036.10      15.15    4124.18    1130.98   11046.82
00:07:42.377
00:07:42.378 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:07:42.378 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:07:42.638 true
00:07:42.638 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3666362
00:07:42.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3666362) - No such process
00:07:42.638 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3666362
00:07:42.638 09:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:42.638 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
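For orientation, the @44-@55 markers above correspond to a hotplug loop of roughly the following shape. This is a minimal sketch reconstructed from the trace markers, not the script's verbatim source; rpc and perf_pid are assumed names for the RPC client path and the backgrounded I/O process.

  # Sketch reconstructed from the @44-@55 trace markers; names are assumptions.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  null_size=1022
  while kill -0 "$perf_pid"; do                                     # @44: loop while the I/O process lives
      "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1  # @45: hot-remove NSID 1
      "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # @46: re-attach the Delay0 bdev
      null_size=$((null_size + 1))                                  # @49: 1023 ... 1055 in this section of the log
      "$rpc" bdev_null_resize NULL1 "$null_size"                    # @50: grow the second namespace's backing bdev
  done
  wait "$perf_pid"                                                  # @53: reap the I/O process once kill -0 fails
  "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # @54: tear down both namespaces
  "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2      # @55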
00:07:42.897 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:42.897 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:42.897 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:07:42.897 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:42.897 09:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:07:43.157 null0
[... the @59-@60 create cycle repeats for null1 through null7, 09:39:58 through 09:39:59 ...]
00:07:44.465 null7
00:07:44.465 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
09:39:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
09:39:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
09:39:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
09:39:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
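The @58-@60 markers above trace a setup loop of roughly this shape, creating one 100 MiB null bdev with a 4096-byte block size per worker. Again a sketch inferred from the trace markers, not the script's verbatim source:

  # Inferred from the @58-@60 markers: one null bdev per worker thread.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nthreads=8
  pids=()
  for ((i = 0; i < nthreads; i++)); do
      "$rpc" bdev_null_create "null$i" 100 4096   # @60: prints the new bdev name (null0 ... null7)
  done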
00:07:44.465 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:44.465 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:07:44.465 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:44.465 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:07:44.465 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
[... the @62-@64 launch cycle repeats for add_remove 2 null1 through add_remove 8 null7, each backgrounded worker opening its own @14-@17 setup; the eight workers' traces interleave from here on ...]
00:07:44.466 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3673002 3673004 3673007 3673010 3673014 3673017 3673019 3673022
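The @14-@18 and @62-@66 markers correspond to worker logic of roughly this shape, which produces the interleaved add/remove output below. A sketch inferred from the trace markers, not the script's verbatim source:

  # Inferred from the @14-@18 and @62-@66 markers: each backgrounded worker
  # adds and removes its own namespace ten times while the others do the same.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  add_remove() {
      local nsid=$1 bdev=$2                                            # @14
      for ((i = 0; i < 10; i++)); do                                   # @16
          "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
          "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
      done
  }
  for ((i = 0; i < nthreads; i++)); do     # @62
      add_remove $((i + 1)) "null$i" &     # @63: NSID i+1 backed by null<i>
      pids+=($!)                           # @64
  done
  wait "${pids[@]}"                        # @66: pids 3673002 ... 3673022 in this run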
00:07:44.728 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:44.728 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:44.728 09:39:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
[... the eight workers' @16-@18 add/remove cycles interleave for ten iterations each, NSIDs 1-8 backed by null0-null7, 09:39:59 through 09:40:01 ...]
00:07:46.312 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:46.312 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:46.312 09:40:01
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:46.312 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:46.312 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:46.312 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:46.312 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:46.312 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:46.312 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:46.312 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:46.312 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:46.312 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:46.312 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:46.313 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:46.313 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:46.313 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:46.313 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:46.313 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:46.574 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.574 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:46.574 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:46.574 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:46.574 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:46.574 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:46.574 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:46.574 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:46.574 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:46.574 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:46.574 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:46.574 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:46.574 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:46.574 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:46.574 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:46.574 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:46.574 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:46.574 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:46.836 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:46.836 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:46.836 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:46.836 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:46.836 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:46.836 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:46.836 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:46.836 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:46.836 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:46.836 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:46.836 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:46.836 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:46.836 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:46.836 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:46.836 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:46.836 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:46.836 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:46.836 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:46.836 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:46.836 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:46.836 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:46.836 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.836 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:46.836 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:46.836 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:46.836 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:47.098 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:47.098 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:47.098 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:47.098 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.099 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:47.099 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:47.099 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:47.099 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:47.099 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.099 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:47.099 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:47.099 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.099 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:47.099 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:47.099 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.099 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:47.099 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:47.099 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:47.099 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:47.099 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.099 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:47.361 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:47.361 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.361 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:47.361 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:47.361 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.361 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:47.361 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.361 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:47.361 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:47.361 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:47.361 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.361 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:47.361 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:47.361 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:47.361 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.361 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:47.361 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:47.361 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:47.361 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:47.361 09:40:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.361 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:47.621 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:47.621 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.622 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:47.622 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:47.622 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.622 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:47.622 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:47.622 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.622 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:47.622 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:47.622 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:47.622 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.622 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:47.622 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:47.622 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:47.622 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.622 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:47.622 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.622 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:47.622 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:47.622 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:47.622 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.622 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:47.883 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:47.883 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:47.883 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:47.883 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.883 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:47.883 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:47.883 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:47.883 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.883 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:47.883 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:47.883 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.883 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:47.883 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:47.883 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.883 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:47.883 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:47.883 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:47.883 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.883 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:47.883 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:47.883 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.883 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:47.883 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.883 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:47.883 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:47.883 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.146 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:48.146 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:48.146 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.146 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.146 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:48.146 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:48.146 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.146 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.146 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.146 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.146 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.146 
09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.146 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.146 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.408 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.408 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.408 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.408 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.408 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:48.408 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:48.408 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:48.408 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:48.408 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:48.408 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:48.408 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:48.408 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:48.408 rmmod nvme_tcp 00:07:48.408 rmmod nvme_fabrics 00:07:48.408 rmmod nvme_keyring 00:07:48.408 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:48.408 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:48.408 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:48.408 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3665912 ']' 00:07:48.408 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3665912 00:07:48.408 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3665912 ']' 00:07:48.408 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3665912 00:07:48.408 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:48.408 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.408 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3665912 00:07:48.408 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:48.408 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:48.408 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3665912' 00:07:48.408 killing process with pid 3665912 00:07:48.408 
09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3665912 00:07:48.408 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3665912 00:07:48.673 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:48.673 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:48.673 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:48.673 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:48.673 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:48.673 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:48.673 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:48.673 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:48.673 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:48.673 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.673 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:48.673 09:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.587 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:50.587 00:07:50.588 real 0m49.079s 00:07:50.588 user 3m20.220s 00:07:50.588 sys 0m17.508s 00:07:50.588 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.588 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:50.588 ************************************ 00:07:50.588 END TEST nvmf_ns_hotplug_stress 00:07:50.588 ************************************ 00:07:50.850 09:40:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:50.850 09:40:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:50.850 09:40:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.850 09:40:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:50.850 ************************************ 00:07:50.850 START TEST nvmf_delete_subsystem 00:07:50.850 ************************************ 00:07:50.850 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:50.850 * Looking for test storage... 
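The records above show the test's teardown: the EXIT trap is cleared (@68), nvmftestfini unloads the kernel initiator modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring output), and killprocess validates the target PID before signalling and reaping it, so a stale or sudo-owned PID is never killed blindly. A hedged sketch of that pattern follows; the function names match the trace, but the bodies are reconstructed from the traced commands rather than copied from SPDK's common.sh.

  nvmfcleanup() {
    sync
    local i
    for i in {1..20}; do               # retry loop seen at common.sh@125
      set +e
      modprobe -v -r nvme-tcp && break # -r also pulls out nvme_fabrics / nvme_keyring
      set -e
      sleep 1
    done
    set -e
    modprobe -v -r nvme-fabrics 2> /dev/null || true
  }

  killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2> /dev/null || return 0              # already gone, nothing to do
    [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1  # the "= sudo" guard at @964
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2> /dev/null || true                     # reap so the exit code is collected
  }

The kill/wait pairing matters here: waiting on the reactor PID is what lets the harness distinguish a clean shutdown from a crash before it flushes the interfaces and starts the next test.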
00:07:50.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:50.850 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:50.850 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:07:50.850 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:50.850 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:50.850 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:50.850 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:50.850 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:50.850 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.850 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:50.850 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:50.850 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:50.850 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:50.850 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:50.850 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:50.850 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:50.850 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:50.850 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:50.850 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:50.850 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:50.850 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:50.850 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:50.850 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.850 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:50.850 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:50.850 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:50.850 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:50.850 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.850 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:51.113 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:51.113 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:51.113 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:51.113 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:51.113 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.113 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:51.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.113 --rc genhtml_branch_coverage=1 00:07:51.113 --rc genhtml_function_coverage=1 00:07:51.113 --rc genhtml_legend=1 00:07:51.113 --rc geninfo_all_blocks=1 00:07:51.113 --rc geninfo_unexecuted_blocks=1 00:07:51.113 00:07:51.113 ' 00:07:51.113 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:51.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.113 --rc genhtml_branch_coverage=1 00:07:51.113 --rc genhtml_function_coverage=1 00:07:51.113 --rc genhtml_legend=1 00:07:51.113 --rc geninfo_all_blocks=1 00:07:51.113 --rc geninfo_unexecuted_blocks=1 00:07:51.113 00:07:51.113 ' 00:07:51.113 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:51.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.113 --rc genhtml_branch_coverage=1 00:07:51.113 --rc genhtml_function_coverage=1 00:07:51.113 --rc genhtml_legend=1 00:07:51.113 --rc geninfo_all_blocks=1 00:07:51.113 --rc geninfo_unexecuted_blocks=1 00:07:51.113 00:07:51.113 ' 00:07:51.113 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:51.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.113 --rc genhtml_branch_coverage=1 00:07:51.113 --rc genhtml_function_coverage=1 00:07:51.113 --rc genhtml_legend=1 00:07:51.113 --rc geninfo_all_blocks=1 00:07:51.113 --rc geninfo_unexecuted_blocks=1 00:07:51.113 00:07:51.113 ' 00:07:51.113 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:51.113 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:51.113 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.113 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.113 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.113 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.113 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.113 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.113 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.113 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.113 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.113 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.113 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:51.113 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:51.113 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.113 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.113 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:51.113 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.113 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:51.113 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:51.113 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.113 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.113 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.114 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.114 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.114 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.114 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:51.114 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.114 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:51.114 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:51.114 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:51.114 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.114 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.114 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.114 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:51.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:51.114 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:51.114 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:51.114 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:51.114 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:51.114 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:51.114 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.114 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:51.114 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:51.114 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:51.114 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.114 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.114 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.114 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:51.114 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:51.114 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:51.114 09:40:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:59.260 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:59.260 
09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:59.260 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:59.260 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:59.261 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:59.261 Found net devices under 0000:4b:00.1: cvl_0_1 
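The trace around this point is nvmf/common.sh discovering which NICs the run may use: it seeds per-vendor arrays (e810, x722, mlx) from a PCI bus cache keyed by vendor:device IDs such as 0x8086:0x159b, then resolves each surviving PCI address to its kernel net device through sysfs. A minimal standalone sketch of that sysfs lookup, assuming a hypothetical helper name pci_to_netdev (common.sh does this inline rather than as a function):

  pci_to_netdev() {
    local pci=$1
    local devs=( "/sys/bus/pci/devices/$pci/net/"* )   # e.g. .../net/cvl_0_0
    [[ -e ${devs[0]} ]] || return 1                    # no driver bound, nothing to report
    printf '%s\n' "${devs[@]##*/}"                     # strip the sysfs path, keep the name
  }
  pci_to_netdev 0000:4b:00.0   # on this rig would print: cvl_0_0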
00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:59.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:59.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.541 ms 00:07:59.261 00:07:59.261 --- 10.0.0.2 ping statistics --- 00:07:59.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.261 rtt min/avg/max/mdev = 0.541/0.541/0.541/0.000 ms 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:59.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:59.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:07:59.261 00:07:59.261 --- 10.0.0.1 ping statistics --- 00:07:59.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.261 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3678392 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3678392 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3678392 ']' 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.261 09:40:13 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:59.261 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:59.261 [2024-11-27 09:40:13.923547] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:07:59.261 [2024-11-27 09:40:13.923615] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.261 [2024-11-27 09:40:14.023532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:59.261 [2024-11-27 09:40:14.074149] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:59.261 [2024-11-27 09:40:14.074208] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:59.261 [2024-11-27 09:40:14.074217] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:59.261 [2024-11-27 09:40:14.074224] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:59.261 [2024-11-27 09:40:14.074230] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:59.261 [2024-11-27 09:40:14.075876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.261 [2024-11-27 09:40:14.075879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.523 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:59.523 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:59.523 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:59.523 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:59.523 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:59.523 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:59.523 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:59.523 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.523 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:59.523 [2024-11-27 09:40:14.802105] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:59.523 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.523 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:59.523 09:40:14 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.523 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:59.523 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.523 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:59.523 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.523 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:59.523 [2024-11-27 09:40:14.826412] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:59.523 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.523 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:59.523 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.523 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:59.523 NULL1 00:07:59.523 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.523 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:59.523 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.523 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:59.523 Delay0 00:07:59.523 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.523 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.523 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.523 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:59.523 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.523 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3678459 00:07:59.523 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:59.523 09:40:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:59.523 [2024-11-27 09:40:14.963349] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
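Stripped of xtrace noise, the setup just traced did two things: nvmftestinit split the pair of E810 ports by moving the target NIC cvl_0_0 into namespace cvl_0_0_ns_spdk at 10.0.0.2 with the initiator side on cvl_0_1 at 10.0.0.1, and delete_subsystem.sh then provisioned the target over RPC before launching perf. The RPC sequence, condensed (rpc_cmd wraps scripts/rpc.py against /var/tmp/spdk.sock; the explicit path below is an assumption):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512    # 1000 MiB null bdev, 512-byte blocks
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s added latency
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The delay bdev is the point of the test: with every I/O held for about a second, the nvmf_delete_subsystem call below is guaranteed to tear the subsystem down while perf I/O is still in flight, which is exactly what produces the completion errors that follow.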
00:08:01.440 09:40:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:08:01.440 09:40:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:01.440 09:40:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:01.702 [several hundred per-I/O records elided: "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)", interleaved with "starting I/O failed: -6", as the subsystem is deleted while perf I/O is in flight]
[2024-11-27 09:40:17.131563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e91680 is same with the state(6) to be set
[2024-11-27 09:40:17.134175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f611400d490 is same with the state(6) to be set
[2024-11-27 09:40:18.104764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e929a0 is same with the state(6) to be set
[2024-11-27 09:40:18.136150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e914a0 is same with the state(6) to be set
[2024-11-27 09:40:18.136296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e91860 is same with the state(6) to be set
[2024-11-27 09:40:18.136882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f611400d020 is same with the state(6) to be set
[2024-11-27 09:40:18.136943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f611400d7c0 is same with the state(6) to be set
00:08:02.906 Initializing NVMe Controllers
00:08:02.906 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:02.906 Controller IO queue size 128, less than required.
00:08:02.906 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:02.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:02.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:02.906 Initialization complete. Launching workers.
00:08:02.906 ======================================================== 00:08:02.906 Latency(us) 00:08:02.906 Device Information : IOPS MiB/s Average min max 00:08:02.906 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 177.04 0.09 880618.42 490.99 1009271.81 00:08:02.906 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 149.69 0.07 945580.11 290.84 1011991.68 00:08:02.906 ======================================================== 00:08:02.906 Total : 326.73 0.16 910380.17 290.84 1011991.68 00:08:02.906 00:08:02.906 [2024-11-27 09:40:18.137367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e929a0 (9): Bad file descriptor 00:08:02.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:02.906 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.906 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:02.906 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3678459 00:08:02.906 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:03.556 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:03.556 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3678459 00:08:03.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3678459) - No such process 00:08:03.556 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3678459 00:08:03.556 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:08:03.556 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3678459 00:08:03.556 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:08:03.556 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.556 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:03.556 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.556 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3678459 00:08:03.556 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:03.556 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:03.556 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:03.556 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:03.556 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:03.556 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.556 09:40:18 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:03.556 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.556 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:03.556 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.556 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:03.556 [2024-11-27 09:40:18.666631] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:03.556 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.556 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.556 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.556 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:03.556 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.556 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3679250 00:08:03.556 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:03.556 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:03.556 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3679250 00:08:03.556 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:03.556 [2024-11-27 09:40:18.765812] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
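The records that follow are delete_subsystem.sh waiting for the second perf run (pid 3679250) to exit on its own this time: it probes the pid with kill -0, sleeping half a second per attempt and giving up after roughly 20 tries. A sketch of that loop (simplified, not the script verbatim):

  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do   # signal 0 probes the pid without signalling it
    (( delay++ > 20 )) && { echo 'perf did not exit in time' >&2; break; }
    sleep 0.5
  done
  # once the pid is gone, kill -0 reports "No such process" and the loop ends;
  # the script then uses wait to collect perf's exit status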
00:08:03.885 09:40:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:03.885 09:40:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3679250 00:08:03.885 09:40:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:04.525 09:40:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:04.525 09:40:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3679250 00:08:04.525 09:40:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:04.786 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:04.786 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3679250 00:08:04.786 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:05.358 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:05.358 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3679250 00:08:05.358 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:05.931 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:05.931 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3679250 00:08:05.931 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:06.504 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:06.504 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3679250 00:08:06.504 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:06.504 Initializing NVMe Controllers 00:08:06.504 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:06.504 Controller IO queue size 128, less than required. 00:08:06.504 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:06.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:06.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:06.504 Initialization complete. Launching workers. 
00:08:06.504 ======================================================== 00:08:06.504 Latency(us) 00:08:06.504 Device Information : IOPS MiB/s Average min max 00:08:06.504 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001865.23 1000246.59 1004814.17 00:08:06.504 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002952.55 1000324.08 1007438.62 00:08:06.504 ======================================================== 00:08:06.504 Total : 256.00 0.12 1002408.89 1000246.59 1007438.62 00:08:06.504 00:08:06.504 [2024-11-27 09:40:21.856775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d40a70 is same with the state(6) to be set 00:08:06.765 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:06.765 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3679250 00:08:06.765 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3679250) - No such process 00:08:06.765 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3679250 00:08:06.765 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:06.765 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:06.765 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:06.765 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:06.765 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:06.765 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:06.765 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:06.765 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:07.026 rmmod nvme_tcp 00:08:07.026 rmmod nvme_fabrics 00:08:07.026 rmmod nvme_keyring 00:08:07.026 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:07.026 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:07.026 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:07.026 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3678392 ']' 00:08:07.026 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3678392 00:08:07.026 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3678392 ']' 00:08:07.026 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3678392 00:08:07.026 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:07.026 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:07.026 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3678392 00:08:07.026 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:07.026 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:07.026 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3678392' 00:08:07.026 killing process with pid 3678392 00:08:07.026 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3678392 00:08:07.026 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3678392 00:08:07.026 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:07.026 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:07.026 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:07.026 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:07.026 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:07.026 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:07.026 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:07.026 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:07.026 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:07.026 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.026 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:07.026 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:09.594 00:08:09.594 real 0m18.419s 00:08:09.594 user 0m30.964s 00:08:09.594 sys 0m6.745s 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.594 ************************************ 00:08:09.594 END TEST nvmf_delete_subsystem 00:08:09.594 ************************************ 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:09.594 ************************************ 00:08:09.594 START TEST nvmf_host_management 00:08:09.594 ************************************ 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 
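The wall of scripts/common.sh trace below is the host_management harness probing the installed lcov: lt 1.15 2 asks whether the detected version (1.15 here) is older than 2, by splitting both version strings on the characters .-: and comparing them field by field; because it is, the legacy --rc lcov_branch_coverage/lcov_function_coverage flags get exported. A compact stand-in for that comparison (cmp_ver_lt is a hypothetical name; the suite's cmp_versions handles all four operators):

  cmp_ver_lt() {                                   # true (0) when $1 < $2
    local IFS=.-: i ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for (( i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++ )); do
      (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0   # missing fields count as 0
      (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
    done
    return 1                                       # equal is not less-than
  }
  cmp_ver_lt 1.15 2 && echo 'lcov < 2: use the old-style branch/function coverage flags'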
00:08:09.594 * Looking for test storage... 00:08:09.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:09.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.594 --rc genhtml_branch_coverage=1 00:08:09.594 --rc genhtml_function_coverage=1 00:08:09.594 --rc genhtml_legend=1 00:08:09.594 --rc geninfo_all_blocks=1 00:08:09.594 --rc geninfo_unexecuted_blocks=1 00:08:09.594 00:08:09.594 ' 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:09.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.594 --rc genhtml_branch_coverage=1 00:08:09.594 --rc genhtml_function_coverage=1 00:08:09.594 --rc genhtml_legend=1 00:08:09.594 --rc geninfo_all_blocks=1 00:08:09.594 --rc geninfo_unexecuted_blocks=1 00:08:09.594 00:08:09.594 ' 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:09.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.594 --rc genhtml_branch_coverage=1 00:08:09.594 --rc genhtml_function_coverage=1 00:08:09.594 --rc genhtml_legend=1 00:08:09.594 --rc geninfo_all_blocks=1 00:08:09.594 --rc geninfo_unexecuted_blocks=1 00:08:09.594 00:08:09.594 ' 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:09.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.594 --rc genhtml_branch_coverage=1 00:08:09.594 --rc genhtml_function_coverage=1 00:08:09.594 --rc genhtml_legend=1 00:08:09.594 --rc geninfo_all_blocks=1 00:08:09.594 --rc geninfo_unexecuted_blocks=1 00:08:09.594 00:08:09.594 ' 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.594 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:08:09.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:09.595 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:17.737 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:17.737 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:17.737 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.737 09:40:31 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:17.737 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:17.737 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:17.737 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:17.737 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:17.737 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:17.737 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:17.737 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:17.737 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:17.737 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:17.738 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:17.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:17.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.716 ms 00:08:17.738 00:08:17.738 --- 10.0.0.2 ping statistics --- 00:08:17.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.738 rtt min/avg/max/mdev = 0.716/0.716/0.716/0.000 ms 00:08:17.738 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:17.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:17.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:08:17.738 00:08:17.738 --- 10.0.0.1 ping statistics --- 00:08:17.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.738 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:08:17.738 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:17.738 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:17.738 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:17.738 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:17.738 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:17.738 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:17.738 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:17.738 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:17.738 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:17.738 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:17.738 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:17.738 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:17.738 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:17.738 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:17.738 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:17.738 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3684201 00:08:17.738 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3684201 00:08:17.738 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:17.738 09:40:32 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3684201 ']' 00:08:17.738 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.738 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.738 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.738 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.738 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:17.738 [2024-11-27 09:40:32.378686] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:08:17.738 [2024-11-27 09:40:32.378767] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.738 [2024-11-27 09:40:32.481772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:17.738 [2024-11-27 09:40:32.537013] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:17.738 [2024-11-27 09:40:32.537069] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:17.738 [2024-11-27 09:40:32.537078] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:17.738 [2024-11-27 09:40:32.537085] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:17.738 [2024-11-27 09:40:32.537095] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
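The target has just been launched inside the cvl_0_0_ns_spdk namespace with core mask 0x1E; 0x1E is binary 11110, so bits 1-4 are set and the four reactor notices that follow land on cores 1, 2, 3 and 4, leaving core 0 free for the bdevperf initiator started later with -c 0x1. A condensed sketch of this launch-and-wait step; the polling loop stands in for waitforlisten, whose actual implementation is not expanded in this trace:

    # Run the NVMe-oF target on cores 1-4 (-m 0x1E = 0b11110) inside
    # the namespace that owns the target-side port of the E810 link.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!

    # Assumed stand-in for waitforlisten: block until the app exposes
    # its RPC UNIX socket.
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
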
00:08:17.738 [2024-11-27 09:40:32.539421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.738 [2024-11-27 09:40:32.539566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:17.738 [2024-11-27 09:40:32.539725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:17.738 [2024-11-27 09:40:32.539729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.738 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.738 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:17.738 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:17.738 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:17.738 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:17.999 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:17.999 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:17.999 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.999 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:17.999 [2024-11-27 09:40:33.240104] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.999 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.999 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:17.999 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:17.999 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:17.999 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:18.000 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:18.000 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:18.000 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.000 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:18.000 Malloc0 00:08:18.000 [2024-11-27 09:40:33.317878] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:18.000 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.000 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:18.000 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:18.000 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:18.000 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=3684550 00:08:18.000 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3684550 /var/tmp/bdevperf.sock 00:08:18.000 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3684550 ']' 00:08:18.000 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:18.000 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:18.000 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:18.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:18.000 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:18.000 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:18.000 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:18.000 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:18.000 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:18.000 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:18.000 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:18.000 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:18.000 { 00:08:18.000 "params": { 00:08:18.000 "name": "Nvme$subsystem", 00:08:18.000 "trtype": "$TEST_TRANSPORT", 00:08:18.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:18.000 "adrfam": "ipv4", 00:08:18.000 "trsvcid": "$NVMF_PORT", 00:08:18.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:18.000 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:18.000 "hdgst": ${hdgst:-false}, 00:08:18.000 "ddgst": ${ddgst:-false} 00:08:18.000 }, 00:08:18.000 "method": "bdev_nvme_attach_controller" 00:08:18.000 } 00:08:18.000 EOF 00:08:18.000 )") 00:08:18.000 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:18.000 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:18.000 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:18.000 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:18.000 "params": { 00:08:18.000 "name": "Nvme0", 00:08:18.000 "trtype": "tcp", 00:08:18.000 "traddr": "10.0.0.2", 00:08:18.000 "adrfam": "ipv4", 00:08:18.000 "trsvcid": "4420", 00:08:18.000 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:18.000 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:18.000 "hdgst": false, 00:08:18.000 "ddgst": false 00:08:18.000 }, 00:08:18.000 "method": "bdev_nvme_attach_controller" 00:08:18.000 }' 00:08:18.000 [2024-11-27 09:40:33.426650] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
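The "--json /dev/fd/63" recorded in the bdevperf command line above is the footprint of bash process substitution: gen_nvmf_target_json prints a bdev_nvme_attach_controller config (its heredoc expansion is visible above, with Nvme0, 10.0.0.2, port 4420 and nqn.2016-06.io.spdk:cnode0 substituted in), and bdevperf reads it through an anonymous file descriptor without ever touching disk. The equivalent invocation written out, with the flags spelled: -q 64 is the queue depth, -o 65536 the I/O size in bytes (64 KiB), -w verify a read-back-and-check workload, -t 10 the run time in seconds:

    # <(...) exposes the generator's stdout as /dev/fd/<n>, which is
    # exactly what the trace shows as --json /dev/fd/63.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10
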
00:08:18.000 [2024-11-27 09:40:33.426715] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3684550 ] 00:08:18.261 [2024-11-27 09:40:33.519234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.261 [2024-11-27 09:40:33.572057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.522 Running I/O for 10 seconds... 00:08:19.097 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:19.097 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:19.097 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:19.097 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.097 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:19.097 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.097 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:19.097 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:19.097 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:19.097 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:19.097 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:19.097 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:19.097 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:19.097 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:19.097 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:19.097 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:19.097 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.097 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:19.097 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.097 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771 00:08:19.097 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:08:19.097 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:19.097 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:19.097 09:40:34 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:19.097 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:19.097 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.097 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:19.097 [2024-11-27 09:40:34.329582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a45130 is same with the state(6) to be set 00:08:19.098 [2024-11-27 09:40:34.330196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1a45130 is same with the state(6) to be set
00:08:19.098 [2024-11-27 09:40:34.330203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a45130 is same with the state(6) to be set
00:08:19.098 [2024-11-27 09:40:34.330297 .. 09:40:34.331529] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: [64 queued READs dumped on SQ deletion: sqid:1 cid:0-63 nsid:1 lba:106496-114560 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completing ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:08:19.099 [2024-11-27 09:40:34.331538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125b210 is same with the state(6) to be set
00:08:19.099 [2024-11-27 09:40:34.332852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:08:19.099 task offset: 106496 on job bdev=Nvme0n1 fails
00:08:19.099
00:08:19.099 Latency(us)
00:08:19.099 [2024-11-27T08:40:34.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:19.099 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:19.099 Job: Nvme0n1 ended in about 0.55 seconds with error
00:08:19.099 Verification LBA range: start 0x0 length 0x400
00:08:19.099 Nvme0n1 : 0.55 1524.46 95.28 117.27 0.00 37986.27 5625.17 36700.16
00:08:19.099 [2024-11-27T08:40:34.565Z] ===================================================================================================================
00:08:19.099 [2024-11-27T08:40:34.565Z] Total : 1524.46 95.28 117.27 0.00 37986.27 5625.17 36700.16
00:08:19.099 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:19.100 [2024-11-27 09:40:34.335128] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:19.100 [2024-11-27 09:40:34.335178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1042000 (9): Bad file descriptor
00:08:19.100 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:08:19.100 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:19.100 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:19.100 [2024-11-27 09:40:34.341184] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:08:19.100 [2024-11-27 09:40:34.341294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:08:19.100 [2024-11-27 09:40:34.341322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:19.100 [2024-11-27 09:40:34.341341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:08:19.100 [2024-11-27 09:40:34.341350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:08:19.100 [2024-11-27 09:40:34.341358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:08:19.100 [2024-11-27 09:40:34.341366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1042000
00:08:19.100 [2024-11-27 09:40:34.341388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1042000 (9): Bad file descriptor
00:08:19.100 [2024-11-27 09:40:34.341402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:08:19.100 [2024-11-27 09:40:34.341410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:08:19.100 [2024-11-27 09:40:34.341421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:08:19.100 [2024-11-27 09:40:34.341432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
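The failure above is the access-control path rather than a transport fault: the target rejects host NQNs that are not on the subsystem's allow list, the FABRIC CONNECT completes with sct 1, sc 132, and bdev_nvme gives up on the reset. A minimal sketch of the allow-list RPCs involved, assuming a running target and the default rpc.py socket (NQNs taken from the trace; nvmf_subsystem_remove_host added here for symmetry):

    # create the subsystem without -a, so only explicitly admitted hosts may connect
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
    # admit the initiator's host NQN; CONNECTs from it now succeed
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # remove it again and new CONNECTs fail exactly as above (sct 1, sc 132)
    ./scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0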
00:08:19.100 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.100 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:20.040 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3684550 00:08:20.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3684550) - No such process 00:08:20.040 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:20.040 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:20.040 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:20.040 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:20.040 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:20.040 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:20.040 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:20.040 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:20.040 { 00:08:20.040 "params": { 00:08:20.040 "name": "Nvme$subsystem", 00:08:20.040 "trtype": "$TEST_TRANSPORT", 00:08:20.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:20.040 "adrfam": "ipv4", 00:08:20.040 "trsvcid": "$NVMF_PORT", 00:08:20.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:20.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:20.040 "hdgst": ${hdgst:-false}, 00:08:20.040 "ddgst": ${ddgst:-false} 00:08:20.040 }, 00:08:20.040 "method": "bdev_nvme_attach_controller" 00:08:20.040 } 00:08:20.040 EOF 00:08:20.040 )") 00:08:20.040 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:20.040 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:20.040 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:20.040 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:20.040 "params": { 00:08:20.040 "name": "Nvme0", 00:08:20.040 "trtype": "tcp", 00:08:20.040 "traddr": "10.0.0.2", 00:08:20.040 "adrfam": "ipv4", 00:08:20.040 "trsvcid": "4420", 00:08:20.040 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:20.040 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:20.040 "hdgst": false, 00:08:20.040 "ddgst": false 00:08:20.040 }, 00:08:20.040 "method": "bdev_nvme_attach_controller" 00:08:20.040 }' 00:08:20.040 [2024-11-27 09:40:35.407835] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
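gen_nvmf_target_json above stitches the printed bdev_nvme_attach_controller fragment into a JSON config that bdevperf reads from fd 62. An equivalent standalone invocation is sketched below; the outer "subsystems"/"bdev" wrapper is assumed to be what the helper emits (the standard SPDK --json layout), and the connection parameters are copied from the printed config:

    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }]
      }]
    }
    EOF
    ./build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1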
00:08:20.040 [2024-11-27 09:40:35.407892] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3684909 ] 00:08:20.040 [2024-11-27 09:40:35.495479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.301 [2024-11-27 09:40:35.531075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.301 Running I/O for 1 seconds... 00:08:21.241 1600.00 IOPS, 100.00 MiB/s 00:08:21.241 Latency(us) 00:08:21.241 [2024-11-27T08:40:36.707Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:21.241 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:21.241 Verification LBA range: start 0x0 length 0x400 00:08:21.241 Nvme0n1 : 1.02 1630.75 101.92 0.00 0.00 38572.39 7318.19 32112.64 00:08:21.241 [2024-11-27T08:40:36.707Z] =================================================================================================================== 00:08:21.241 [2024-11-27T08:40:36.707Z] Total : 1630.75 101.92 0.00 0.00 38572.39 7318.19 32112.64 00:08:21.500 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:21.500 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:21.500 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:21.500 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:21.500 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:21.500 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:21.500 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:21.500 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:21.500 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:21.500 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:21.500 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:21.500 rmmod nvme_tcp 00:08:21.500 rmmod nvme_fabrics 00:08:21.500 rmmod nvme_keyring 00:08:21.500 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:21.500 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:21.500 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:21.500 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3684201 ']' 00:08:21.500 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3684201 00:08:21.500 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3684201 ']' 00:08:21.500 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3684201 00:08:21.500 09:40:36 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:21.500 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:21.500 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3684201 00:08:21.500 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:21.760 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:21.760 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3684201' 00:08:21.760 killing process with pid 3684201 00:08:21.760 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3684201 00:08:21.760 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3684201 00:08:21.760 [2024-11-27 09:40:37.057062] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:21.760 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:21.760 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:21.760 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:21.760 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:21.760 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:21.760 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:21.760 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:21.760 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:21.760 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:21.760 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.760 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.760 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:24.302 00:08:24.302 real 0m14.547s 00:08:24.302 user 0m22.937s 00:08:24.302 sys 0m6.619s 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.302 ************************************ 00:08:24.302 END TEST nvmf_host_management 00:08:24.302 ************************************ 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
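Before nvmf_lvol begins: condensed, the nvmftestfini/killprocess teardown traced above amounts to the following (a sketch using the interface names and PID from this run; the harness wraps the modprobe in a retry loop):

    sync
    modprobe -v -r nvme-tcp        # the rmmod lines show nvme_tcp, nvme_fabrics and nvme_keyring unloading
    modprobe -v -r nvme-fabrics
    kill 3684201                   # nvmfpid for this run
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged rules
    ip -4 addr flush cvl_0_1
    # _remove_spdk_ns runs with xtrace off; deleting the cvl_0_0_ns_spdk namespace is assumed to be what it does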
00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:24.302 ************************************ 00:08:24.302 START TEST nvmf_lvol 00:08:24.302 ************************************ 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:24.302 * Looking for test storage... 00:08:24.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:24.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.302 --rc genhtml_branch_coverage=1 00:08:24.302 --rc genhtml_function_coverage=1 00:08:24.302 --rc genhtml_legend=1 00:08:24.302 --rc geninfo_all_blocks=1 00:08:24.302 --rc geninfo_unexecuted_blocks=1 00:08:24.302 00:08:24.302 ' 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:24.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.302 --rc genhtml_branch_coverage=1 00:08:24.302 --rc genhtml_function_coverage=1 00:08:24.302 --rc genhtml_legend=1 00:08:24.302 --rc geninfo_all_blocks=1 00:08:24.302 --rc geninfo_unexecuted_blocks=1 00:08:24.302 00:08:24.302 ' 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:24.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.302 --rc genhtml_branch_coverage=1 00:08:24.302 --rc genhtml_function_coverage=1 00:08:24.302 --rc genhtml_legend=1 00:08:24.302 --rc geninfo_all_blocks=1 00:08:24.302 --rc geninfo_unexecuted_blocks=1 00:08:24.302 00:08:24.302 ' 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:24.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.302 --rc genhtml_branch_coverage=1 00:08:24.302 --rc genhtml_function_coverage=1 00:08:24.302 --rc genhtml_legend=1 00:08:24.302 --rc geninfo_all_blocks=1 00:08:24.302 --rc geninfo_unexecuted_blocks=1 00:08:24.302 00:08:24.302 ' 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
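The xtrace above is scripts/common.sh splitting the lcov version on ".-:" and comparing it against 2 field by field. A condensed standalone equivalent of that lt/cmp_versions walk (a sketch, not the harness's literal code):

    lt() {                                   # lt A B -> true when version A < version B
        local IFS=.-: v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                             # equal is not less-than
    }
    if lt "$(lcov --version | awk '{print $NF}')" 2; then
        export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi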
00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:24.302 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:24.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:24.303 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:32.456 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:32.456 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:32.456 09:40:46 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:32.456 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:32.456 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:32.456 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:32.457 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:32.457 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:32.457 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:32.457 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:32.457 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:32.457 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:32.457 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:08:32.457 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:32.457 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:32.457 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:32.457 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:32.457 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:32.457 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:32.457 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:32.457 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:32.457 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:32.457 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:32.457 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:32.457 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:32.457 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:32.457 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:32.457 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:32.457 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:32.457 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:32.457 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:32.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:32.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.701 ms 00:08:32.457 00:08:32.457 --- 10.0.0.2 ping statistics --- 00:08:32.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.457 rtt min/avg/max/mdev = 0.701/0.701/0.701/0.000 ms 00:08:32.457 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:32.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:32.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:08:32.457 00:08:32.457 --- 10.0.0.1 ping statistics --- 00:08:32.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.457 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:08:32.457 09:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:32.457 09:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:32.457 09:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:32.457 09:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:32.457 09:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:32.457 09:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:32.457 09:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:32.457 09:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:32.457 09:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:32.457 09:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:32.457 09:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:32.457 09:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:32.457 09:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:32.457 09:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3689586 00:08:32.457 09:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3689586 00:08:32.457 09:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:32.457 09:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3689586 ']' 00:08:32.457 09:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.457 09:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:32.457 09:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.457 09:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:32.457 09:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:32.457 [2024-11-27 09:40:47.115482] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
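For reference, the initiator/target plumbing behind those ping checks was assembled a few entries earlier with plain iproute2/iptables commands; pulled out of the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator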
00:08:32.457 [2024-11-27 09:40:47.115549] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:32.457 [2024-11-27 09:40:47.214859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:32.457 [2024-11-27 09:40:47.266701] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:32.457 [2024-11-27 09:40:47.266755] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:32.457 [2024-11-27 09:40:47.266764] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:32.457 [2024-11-27 09:40:47.266771] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:32.457 [2024-11-27 09:40:47.266777] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:32.457 [2024-11-27 09:40:47.268658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:32.457 [2024-11-27 09:40:47.268815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:32.457 [2024-11-27 09:40:47.268817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:08:32.720 09:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:32.720 09:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0
00:08:32.720 09:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:08:32.720 09:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:32.720 09:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:08:32.720 09:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:32.720 09:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:08:32.720 [2024-11-27 09:40:48.152585] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:32.981 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:32.981 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 '
00:08:32.981 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:33.243 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1
00:08:33.243 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
00:08:33.505 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs
00:08:33.766 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=cc3a8737-2e65-4345-87d1-05620465b59e
00:08:33.766 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cc3a8737-2e65-4345-87d1-05620465b59e lvol 20
00:08:34.026 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=fdb2c678-2147-4254-8f15-7c528d2d5f6e
00:08:34.026 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:08:34.026 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fdb2c678-2147-4254-8f15-7c528d2d5f6e
00:08:34.287 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:08:34.547 [2024-11-27 09:40:49.817016] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:34.547 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:08:34.809 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3690155
00:08:34.809 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1
00:08:34.809 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:08:35.753 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot fdb2c678-2147-4254-8f15-7c528d2d5f6e MY_SNAPSHOT
00:08:36.015 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=84c7f729-527f-4c74-adfa-35d3c696eaed
00:08:36.015 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize fdb2c678-2147-4254-8f15-7c528d2d5f6e 30
00:08:36.015 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 84c7f729-527f-4c74-adfa-35d3c696eaed MY_CLONE
00:08:36.275 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c7789cca-f2f1-40bf-9c37-84185e800fa3
00:08:36.275 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c7789cca-f2f1-40bf-9c37-84185e800fa3
00:08:36.846 09:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3690155
00:08:44.991 Initializing NVMe Controllers
00:08:44.991 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:08:44.991 Controller IO queue size 128, less than required.
00:08:44.991 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
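Stripped of the xtrace prefixes, the nvmf_lvol body traced above is one short RPC sequence: stripe two malloc bdevs into a raid0, put a logical volume store on it, export a 20 MiB lvol over NVMe/TCP, then snapshot, resize, clone, and inflate it while spdk_nvme_perf drives random writes (its output continues below). A minimal sketch; capturing the returned UUIDs in shell variables is a condensation of what the script does:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512                        # -> Malloc0
    $rpc bdev_malloc_create 64 512                        # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)        # prints the lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)       # 20 MiB volume, prints its UUID
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # with spdk_nvme_perf running against cnode0 in the background:
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # live lvol becomes a clone of the snapshot
    $rpc bdev_lvol_resize "$lvol" 30                      # grow the live volume to 30 MiB
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"                       # allocate all clusters, detach from snapshot

The point of overlapping these operations with the perf run is to exercise snapshot/clone metadata changes while the volume is under live write load.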
00:08:44.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:08:44.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:08:44.991 Initialization complete. Launching workers.
00:08:44.991 ========================================================
00:08:44.991 Latency(us)
00:08:44.991 Device Information : IOPS MiB/s Average min max
00:08:44.991 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15786.30 61.67 8109.04 1551.24 46945.44
00:08:44.991 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17165.30 67.05 7458.07 1054.75 61942.93
00:08:44.991 ========================================================
00:08:44.991 Total : 32951.60 128.72 7769.94 1054.75 61942.93
00:08:44.991
00:08:44.991 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:08:45.252 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fdb2c678-2147-4254-8f15-7c528d2d5f6e
00:08:45.512 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cc3a8737-2e65-4345-87d1-05620465b59e
00:08:45.512 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:08:45.512 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:08:45.773 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:08:45.773 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:45.773 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:08:45.773 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:45.773 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:08:45.773 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:45.773 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:45.773 rmmod nvme_tcp
00:08:45.773 rmmod nvme_fabrics
00:08:45.773 rmmod nvme_keyring
00:08:45.773 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:45.773 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:08:45.773 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:08:45.773 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3689586 ']'
00:08:45.773 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3689586
00:08:45.773 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3689586 ']'
00:08:45.773 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3689586
00:08:45.773 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:08:45.773 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:45.773 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3689586
00:08:45.773 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:45.773 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:45.773 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3689586'
00:08:45.773 killing process with pid 3689586
00:08:45.773 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3689586
00:08:45.773 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3689586
00:08:45.773 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:45.773 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:08:45.773 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:08:45.773 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr
00:08:46.033 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore
00:08:46.033 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save
00:08:46.033 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:08:46.033 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:08:46.033 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns
00:08:46.033 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:46.033 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:46.033 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:47.944 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:08:47.944
00:08:47.944 real 0m24.074s
00:08:47.944 user 1m5.127s
00:08:47.944 sys 0m8.656s
00:08:47.944 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:47.944 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:08:47.944 ************************************
00:08:47.944 END TEST nvmf_lvol
00:08:47.944 ************************************
00:08:47.944 09:41:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:08:47.944 09:41:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:47.944 09:41:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:47.944 09:41:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:47.944 ************************************
00:08:47.944 START TEST nvmf_lvs_grow
00:08:47.944 ************************************
00:08:47.944 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:08:48.205 * Looking for test storage...
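Teardown mirrors setup: delete the subsystem and lvol objects over RPC, unload the kernel initiator modules, kill the target, and undo the firewall and namespace changes. A sketch of what nvmftestfini amounts to, going by the trace above (that _remove_spdk_ns deletes the cvl_0_0_ns_spdk namespace is an assumption; the log shows only the call and the final address flush):

    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    $rpc bdev_lvol_delete "$lvol"
    $rpc bdev_lvol_delete_lvstore -u "$lvs"
    modprobe -v -r nvme-tcp            # also drops nvme_fabrics/nvme_keyring, per the rmmod lines
    kill "$nvmfpid"                    # 3689586 in this run
    # strip only the rules the test inserted, keyed on the SPDK_NVMF comment
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk    # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1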
00:08:48.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-:
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-:
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<'
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:48.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:48.205 --rc genhtml_branch_coverage=1
00:08:48.205 --rc genhtml_function_coverage=1
00:08:48.205 --rc genhtml_legend=1
00:08:48.205 --rc geninfo_all_blocks=1
00:08:48.205 --rc geninfo_unexecuted_blocks=1
00:08:48.205
00:08:48.205 '
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:48.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:48.205 --rc genhtml_branch_coverage=1
00:08:48.205 --rc genhtml_function_coverage=1
00:08:48.205 --rc genhtml_legend=1
00:08:48.205 --rc geninfo_all_blocks=1
00:08:48.205 --rc geninfo_unexecuted_blocks=1
00:08:48.205
00:08:48.205 '
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:08:48.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:48.205 --rc genhtml_branch_coverage=1
00:08:48.205 --rc genhtml_function_coverage=1
00:08:48.205 --rc genhtml_legend=1
00:08:48.205 --rc geninfo_all_blocks=1
00:08:48.205 --rc geninfo_unexecuted_blocks=1
00:08:48.205
00:08:48.205 '
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:08:48.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:48.205 --rc genhtml_branch_coverage=1
00:08:48.205 --rc genhtml_function_coverage=1
00:08:48.205 --rc genhtml_legend=1
00:08:48.205 --rc geninfo_all_blocks=1
00:08:48.205 --rc geninfo_unexecuted_blocks=1
00:08:48.205
00:08:48.205 '
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:48.205 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:48.206 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:48.206 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:48.206 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH
00:08:48.206 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:48.206 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0
00:08:48.206 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:08:48.206 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:08:48.206 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:08:48.206 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:48.206 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:48.206 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:08:48.206 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:08:48.206 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:08:48.206 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:08:48.206 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0
00:08:48.206 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:08:48.206 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:08:48.206 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit
00:08:48.206 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:08:48.206 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:08:48.206 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs
00:08:48.206 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no
00:08:48.206 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns
00:08:48.206 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:48.206 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:48.206 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:48.206 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:08:48.206 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:08:48.206 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable
00:08:48.206 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=()
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=()
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=()
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=()
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=()
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=()
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=()
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:08:56.429 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:08:56.429 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]]
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:08:56.429 Found net devices under 0000:4b:00.0: cvl_0_0
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]]
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:08:56.429 Found net devices under 0000:4b:00.1: cvl_0_1
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:08:56.429 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:08:56.430 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:08:56.430 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:08:56.430 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:08:56.430 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:08:56.430 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:56.430 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:56.430 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:56.430 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:08:56.430 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:08:56.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:56.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms
00:08:56.430
00:08:56.430 --- 10.0.0.2 ping statistics ---
00:08:56.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:56.430 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms
00:08:56.430 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:56.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:56.430 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms
00:08:56.430
00:08:56.430 --- 10.0.0.1 ping statistics ---
00:08:56.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:56.430 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms
00:08:56.430 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:56.430 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0
00:08:56.430 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:56.430 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:56.430 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:56.430 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:56.430 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:56.430 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:56.430 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:08:56.430 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1
00:08:56.430 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:08:56.430 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:56.430 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:08:56.430 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3696665
00:08:56.430 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3696665
00:08:56.430 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:08:56.430 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3696665 ']'
00:08:56.430 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:56.430 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:56.430 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:56.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:56.430 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:56.430 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:08:56.430 [2024-11-27 09:41:11.159826] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization...
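The device-discovery loop above works from the suite's prebuilt pci_bus_cache keyed on vendor:device IDs (0x8086:0x159b is the Intel E810 "ice" part, found here at 0000:4b:00.0 and 0000:4b:00.1) and resolves each PCI function to its kernel netdev through sysfs. An equivalent standalone probe might look like this (a sketch, not the suite's actual code):

    # list E810 (8086:159b) functions, then the netdevs the kernel bound to them
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $dev ]] && echo "Found net devices under $pci: ${dev##*/}"
        done
    done

With two usable ports found, nvmf_tcp_init then reruns the same namespace/address setup seen in the nvmf_lvol test before starting the target, this time on a single core (-m 0x1).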
00:08:56.430 [2024-11-27 09:41:11.159899] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:56.430 [2024-11-27 09:41:11.242255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:56.430 [2024-11-27 09:41:11.293420] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:56.430 [2024-11-27 09:41:11.293471] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:56.430 [2024-11-27 09:41:11.293479] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:56.430 [2024-11-27 09:41:11.293486] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:56.430 [2024-11-27 09:41:11.293492] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:56.430 [2024-11-27 09:41:11.294313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:56.697 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:56.697 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0
00:08:56.697 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:08:56.697 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:56.697 09:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:08:56.697 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:56.697 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:08:56.957 [2024-11-27 09:41:12.196710] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:56.957 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow
00:08:56.957 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:56.957 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:56.957 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:08:56.957 ************************************
00:08:56.957 START TEST lvs_grow_clean
00:08:56.957 ************************************
00:08:56.957 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow
00:08:56.957 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:08:56.957 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:08:56.957 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:08:56.957 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:08:56.957 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:08:56.957 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:08:56.957 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:08:56.957 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:08:56.957 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:08:57.217 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:08:57.217 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:08:57.218 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8caebeb1-56aa-4a8a-897f-95317734c450
00:08:57.218 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:08:57.218 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8caebeb1-56aa-4a8a-897f-95317734c450
00:08:57.480 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:08:57.480 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:08:57.480 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8caebeb1-56aa-4a8a-897f-95317734c450 lvol 150
00:08:57.740 09:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=b4b35fbb-177f-4768-838f-d5d61a7367ff
00:08:57.740 09:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:08:57.740 09:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:08:58.001 [2024-11-27 09:41:13.220991] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:08:58.001 [2024-11-27 09:41:13.221062] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:08:58.001 true
00:08:58.001 09:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8caebeb1-56aa-4a8a-897f-95317734c450
00:08:58.001 09:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:08:58.001 09:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:08:58.001 09:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:08:58.262 09:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b4b35fbb-177f-4768-838f-d5d61a7367ff
00:08:58.522 09:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:08:58.522 [2024-11-27 09:41:13.923283] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:58.523 09:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:08:58.784 09:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3697230
00:08:58.784 09:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:08:58.785 09:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3697230 /var/tmp/bdevperf.sock
00:08:58.785 09:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:08:58.785 09:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3697230 ']'
00:08:58.785 09:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:08:58.785 09:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:58.785 09:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:08:58.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:08:58.785 09:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:58.785 09:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:08:58.785 [2024-11-27 09:41:14.163499] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization...
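The cluster counts above follow directly from the sizes involved: the backing file is 200 MiB and the cluster size is 4 MiB (4194304 bytes), giving 50 clusters, one of which goes to lvstore metadata, hence total_data_clusters == 49. Growing the file to 400 MiB and rescanning only resizes the aio bdev (51200 -> 102400 blocks of 4 KiB); the lvstore keeps reporting 49 until bdev_lvol_grow_lvstore runs later in the test, after which it reports 99 (100 clusters minus metadata). Condensed into a sketch (aio_file stands in for the test's aio_bdev file under test/nvmf/target):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    truncate -s 200M aio_file
    $rpc bdev_aio_create aio_file aio_bdev 4096           # 200 MiB / 4 KiB = 51200 blocks
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)  # 50 clusters, 1 for metadata -> 49
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)      # 150 MiB volume
    truncate -s 400M aio_file                             # grow the backing file...
    $rpc bdev_aio_rescan aio_bdev                         # ...bdev now 102400 blocks; lvstore still 49
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'
    $rpc bdev_lvol_grow_lvstore -u "$lvs"                 # run later, under I/O: 49 -> 99

The lvol is then exported through cnode0 and bdevperf is started as an idle server (-z) on its own RPC socket, so the workload can be attached and kicked off over RPC while the grow happens mid-run.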
00:08:58.785 [2024-11-27 09:41:14.163571] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3697230 ]
00:08:59.045 [2024-11-27 09:41:14.256072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:59.045 [2024-11-27 09:41:14.308271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:59.618 09:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:59.618 09:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0
00:08:59.618 09:41:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:09:00.189 Nvme0n1
00:09:00.189 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:09:00.189 [
00:09:00.189 {
00:09:00.189 "name": "Nvme0n1",
00:09:00.189 "aliases": [
00:09:00.189 "b4b35fbb-177f-4768-838f-d5d61a7367ff"
00:09:00.189 ],
00:09:00.189 "product_name": "NVMe disk",
00:09:00.189 "block_size": 4096,
00:09:00.189 "num_blocks": 38912,
00:09:00.189 "uuid": "b4b35fbb-177f-4768-838f-d5d61a7367ff",
00:09:00.189 "numa_id": 0,
00:09:00.189 "assigned_rate_limits": {
00:09:00.189 "rw_ios_per_sec": 0,
00:09:00.189 "rw_mbytes_per_sec": 0,
00:09:00.189 "r_mbytes_per_sec": 0,
00:09:00.189 "w_mbytes_per_sec": 0
00:09:00.189 },
00:09:00.189 "claimed": false,
00:09:00.189 "zoned": false,
00:09:00.189 "supported_io_types": {
00:09:00.189 "read": true,
00:09:00.189 "write": true,
00:09:00.189 "unmap": true,
00:09:00.189 "flush": true,
00:09:00.189 "reset": true,
00:09:00.189 "nvme_admin": true,
00:09:00.189 "nvme_io": true,
00:09:00.189 "nvme_io_md": false,
00:09:00.189 "write_zeroes": true,
00:09:00.189 "zcopy": false,
00:09:00.189 "get_zone_info": false,
00:09:00.189 "zone_management": false,
00:09:00.189 "zone_append": false,
00:09:00.189 "compare": true,
00:09:00.189 "compare_and_write": true,
00:09:00.189 "abort": true,
00:09:00.189 "seek_hole": false,
00:09:00.189 "seek_data": false,
00:09:00.189 "copy": true,
00:09:00.189 "nvme_iov_md": false
00:09:00.189 },
00:09:00.189 "memory_domains": [
00:09:00.189 {
00:09:00.189 "dma_device_id": "system",
00:09:00.189 "dma_device_type": 1
00:09:00.189 }
00:09:00.189 ],
00:09:00.189 "driver_specific": {
00:09:00.189 "nvme": [
00:09:00.189 {
00:09:00.189 "trid": {
00:09:00.189 "trtype": "TCP",
00:09:00.189 "adrfam": "IPv4",
00:09:00.189 "traddr": "10.0.0.2",
00:09:00.189 "trsvcid": "4420",
00:09:00.189 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:09:00.189 },
00:09:00.189 "ctrlr_data": {
00:09:00.189 "cntlid": 1,
00:09:00.189 "vendor_id": "0x8086",
00:09:00.189 "model_number": "SPDK bdev Controller",
00:09:00.189 "serial_number": "SPDK0",
00:09:00.189 "firmware_revision": "25.01",
00:09:00.189 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:09:00.189 "oacs": {
00:09:00.189 "security": 0,
00:09:00.189 "format": 0,
00:09:00.189 "firmware": 0,
00:09:00.189 "ns_manage": 0
00:09:00.189 },
00:09:00.189 "multi_ctrlr": true,
00:09:00.189 "ana_reporting": false
00:09:00.189 },
00:09:00.189 "vs": {
00:09:00.189 "nvme_version": "1.3"
00:09:00.189 },
00:09:00.189 "ns_data": {
00:09:00.189 "id": 1,
00:09:00.189 "can_share": true
00:09:00.189 }
00:09:00.189 }
00:09:00.189 ],
00:09:00.189 "mp_policy": "active_passive"
00:09:00.189 }
00:09:00.189 }
00:09:00.189 ]
00:09:00.189 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3697444
00:09:00.189 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:09:00.189 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:09:00.449 Running I/O for 10 seconds...
00:09:01.389 Latency(us)
00:09:01.389 [2024-11-27T08:41:16.855Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:01.389 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:01.389 Nvme0n1 : 1.00 23749.00 92.77 0.00 0.00 0.00 0.00 0.00
00:09:01.389 [2024-11-27T08:41:16.855Z] ===================================================================================================================
00:09:01.389 [2024-11-27T08:41:16.855Z] Total : 23749.00 92.77 0.00 0.00 0.00 0.00 0.00
00:09:01.389
00:09:02.328 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8caebeb1-56aa-4a8a-897f-95317734c450
00:09:02.328 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:02.328 Nvme0n1 : 2.00 23946.50 93.54 0.00 0.00 0.00 0.00 0.00
00:09:02.328 [2024-11-27T08:41:17.794Z] ===================================================================================================================
00:09:02.328 [2024-11-27T08:41:17.794Z] Total : 23946.50 93.54 0.00 0.00 0.00 0.00 0.00
00:09:02.328
00:09:02.328 true
00:09:02.328 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8caebeb1-56aa-4a8a-897f-95317734c450
00:09:02.328 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:09:02.588 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:09:02.588 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:09:02.588 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3697444
00:09:03.529 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:03.529 Nvme0n1 : 3.00 24039.00 93.90 0.00 0.00 0.00 0.00 0.00
00:09:03.529 [2024-11-27T08:41:18.995Z] ===================================================================================================================
00:09:03.529 [2024-11-27T08:41:18.995Z] Total : 24039.00 93.90 0.00 0.00 0.00 0.00 0.00
00:09:03.529
00:09:04.469 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:04.469 Nvme0n1 : 4.00 24103.25 94.15 0.00 0.00 0.00 0.00 0.00
00:09:04.469 [2024-11-27T08:41:19.935Z] ===================================================================================================================
00:09:04.469 [2024-11-27T08:41:19.935Z] Total : 24103.25 94.15 0.00 0.00 0.00 0.00 0.00
00:09:04.469
00:09:05.411 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:05.411 Nvme0n1 : 5.00 24146.60 94.32 0.00 0.00 0.00 0.00 0.00
00:09:05.411 [2024-11-27T08:41:20.877Z] ===================================================================================================================
00:09:05.411 [2024-11-27T08:41:20.877Z] Total : 24146.60 94.32 0.00 0.00 0.00 0.00 0.00
00:09:05.411
00:09:06.352 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:06.352 Nvme0n1 : 6.00 24184.83 94.47 0.00 0.00 0.00 0.00 0.00
00:09:06.352 [2024-11-27T08:41:21.818Z] ===================================================================================================================
00:09:06.353 [2024-11-27T08:41:21.818Z] Total : 24184.83 94.47 0.00 0.00 0.00 0.00 0.00
00:09:06.353
00:09:07.294 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:07.294 Nvme0n1 : 7.00 24216.71 94.60 0.00 0.00 0.00 0.00 0.00
00:09:07.294 [2024-11-27T08:41:22.760Z] ===================================================================================================================
00:09:07.294 [2024-11-27T08:41:22.760Z] Total : 24216.71 94.60 0.00 0.00 0.00 0.00 0.00
00:09:07.294
00:09:08.236 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:08.236 Nvme0n1 : 8.00 24243.62 94.70 0.00 0.00 0.00 0.00 0.00
00:09:08.236 [2024-11-27T08:41:23.702Z] ===================================================================================================================
00:09:08.236 [2024-11-27T08:41:23.702Z] Total : 24243.62 94.70 0.00 0.00 0.00 0.00 0.00
00:09:08.236
00:09:09.621 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:09.621 Nvme0n1 : 9.00 24263.67 94.78 0.00 0.00 0.00 0.00 0.00
00:09:09.621 [2024-11-27T08:41:25.087Z] ===================================================================================================================
00:09:09.622 [2024-11-27T08:41:25.088Z] Total : 24263.67 94.78 0.00 0.00 0.00 0.00 0.00
00:09:09.622
00:09:10.563 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:10.563 Nvme0n1 : 10.00 24281.30 94.85 0.00 0.00 0.00 0.00 0.00
00:09:10.563 [2024-11-27T08:41:26.029Z] ===================================================================================================================
00:09:10.563 [2024-11-27T08:41:26.029Z] Total : 24281.30 94.85 0.00 0.00 0.00 0.00 0.00
00:09:10.563
00:09:10.563
00:09:10.563 Latency(us)
00:09:10.563 [2024-11-27T08:41:26.029Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:10.563 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:10.563 Nvme0n1 : 10.01 24281.65 94.85 0.00 0.00 5267.45 3549.87 15400.96
00:09:10.563 [2024-11-27T08:41:26.029Z] ===================================================================================================================
00:09:10.563 [2024-11-27T08:41:26.029Z] Total : 24281.65 94.85 0.00 0.00 5267.45 3549.87 15400.96
00:09:10.563
00:09:10.563 {
00:09:10.563 "results": [
00:09:10.563 {
00:09:10.563 "job": "Nvme0n1",
00:09:10.563 "core_mask": "0x2",
00:09:10.563 "workload": "randwrite",
00:09:10.563 "status": "finished",
00:09:10.563 "queue_depth": 128,
00:09:10.563 "io_size": 4096,
"runtime": 10.005127, 00:09:10.563 "iops": 24281.65079763605, 00:09:10.563 "mibps": 94.85019842826583, 00:09:10.563 "io_failed": 0, 00:09:10.563 "io_timeout": 0, 00:09:10.563 "avg_latency_us": 5267.452470627299, 00:09:10.563 "min_latency_us": 3549.866666666667, 00:09:10.563 "max_latency_us": 15400.96 00:09:10.563 } 00:09:10.563 ], 00:09:10.563 "core_count": 1 00:09:10.563 } 00:09:10.563 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3697230 00:09:10.563 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3697230 ']' 00:09:10.563 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3697230 00:09:10.563 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:10.563 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:10.563 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3697230 00:09:10.563 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:10.563 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:10.563 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3697230' 00:09:10.563 killing process with pid 3697230 00:09:10.563 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3697230 00:09:10.563 Received shutdown signal, test time was about 10.000000 seconds 00:09:10.563 00:09:10.563 Latency(us) 00:09:10.563 [2024-11-27T08:41:26.029Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.563 [2024-11-27T08:41:26.029Z] =================================================================================================================== 00:09:10.563 [2024-11-27T08:41:26.029Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:10.563 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3697230 00:09:10.563 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:10.824 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:10.824 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8caebeb1-56aa-4a8a-897f-95317734c450 00:09:10.824 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:11.084 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:11.084 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:11.084 09:41:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:11.344 [2024-11-27 09:41:26.611750] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:11.344 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8caebeb1-56aa-4a8a-897f-95317734c450 00:09:11.344 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:11.344 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8caebeb1-56aa-4a8a-897f-95317734c450 00:09:11.344 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:11.344 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:11.345 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:11.345 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:11.345 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:11.345 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:11.345 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:11.345 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:11.345 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8caebeb1-56aa-4a8a-897f-95317734c450 00:09:11.605 request: 00:09:11.605 { 00:09:11.605 "uuid": "8caebeb1-56aa-4a8a-897f-95317734c450", 00:09:11.605 "method": "bdev_lvol_get_lvstores", 00:09:11.605 "req_id": 1 00:09:11.605 } 00:09:11.605 Got JSON-RPC error response 00:09:11.605 response: 00:09:11.605 { 00:09:11.605 "code": -19, 00:09:11.605 "message": "No such device" 00:09:11.605 } 00:09:11.605 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:11.605 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:11.605 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:11.605 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:11.605 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:11.605 aio_bdev 00:09:11.605 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b4b35fbb-177f-4768-838f-d5d61a7367ff 00:09:11.605 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=b4b35fbb-177f-4768-838f-d5d61a7367ff 00:09:11.605 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:11.605 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:11.605 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:11.605 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:11.605 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:11.865 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b4b35fbb-177f-4768-838f-d5d61a7367ff -t 2000 00:09:11.865 [ 00:09:11.865 { 00:09:11.865 "name": "b4b35fbb-177f-4768-838f-d5d61a7367ff", 00:09:11.865 "aliases": [ 00:09:11.865 "lvs/lvol" 00:09:11.865 ], 00:09:11.865 "product_name": "Logical Volume", 00:09:11.866 "block_size": 4096, 00:09:11.866 "num_blocks": 38912, 00:09:11.866 "uuid": "b4b35fbb-177f-4768-838f-d5d61a7367ff", 00:09:11.866 "assigned_rate_limits": { 00:09:11.866 "rw_ios_per_sec": 0, 00:09:11.866 "rw_mbytes_per_sec": 0, 00:09:11.866 "r_mbytes_per_sec": 0, 00:09:11.866 "w_mbytes_per_sec": 0 00:09:11.866 }, 00:09:11.866 "claimed": false, 00:09:11.866 "zoned": false, 00:09:11.866 "supported_io_types": { 00:09:11.866 "read": true, 00:09:11.866 "write": true, 00:09:11.866 "unmap": true, 00:09:11.866 "flush": false, 00:09:11.866 "reset": true, 00:09:11.866 "nvme_admin": false, 00:09:11.866 "nvme_io": false, 00:09:11.866 "nvme_io_md": false, 00:09:11.866 "write_zeroes": true, 00:09:11.866 "zcopy": false, 00:09:11.866 "get_zone_info": false, 00:09:11.866 "zone_management": false, 00:09:11.866 "zone_append": false, 00:09:11.866 "compare": false, 00:09:11.866 "compare_and_write": false, 00:09:11.866 "abort": false, 00:09:11.866 "seek_hole": true, 00:09:11.866 "seek_data": true, 00:09:11.866 "copy": false, 00:09:11.866 "nvme_iov_md": false 00:09:11.866 }, 00:09:11.866 "driver_specific": { 00:09:11.866 "lvol": { 00:09:11.866 "lvol_store_uuid": "8caebeb1-56aa-4a8a-897f-95317734c450", 00:09:11.866 "base_bdev": "aio_bdev", 00:09:11.866 "thin_provision": false, 00:09:11.866 "num_allocated_clusters": 38, 00:09:11.866 "snapshot": false, 00:09:11.866 "clone": false, 00:09:11.866 "esnap_clone": false 00:09:11.866 } 00:09:11.866 } 00:09:11.866 } 00:09:11.866 ] 00:09:12.126 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:12.126 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8caebeb1-56aa-4a8a-897f-95317734c450 00:09:12.126 
09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:12.126 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:12.126 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8caebeb1-56aa-4a8a-897f-95317734c450 00:09:12.126 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:12.386 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:12.386 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b4b35fbb-177f-4768-838f-d5d61a7367ff 00:09:12.386 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8caebeb1-56aa-4a8a-897f-95317734c450 00:09:12.646 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:12.905 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:12.905 00:09:12.905 real 0m15.973s 00:09:12.905 user 0m15.579s 00:09:12.905 sys 0m1.547s 00:09:12.905 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.905 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:12.905 ************************************ 00:09:12.905 END TEST lvs_grow_clean 00:09:12.905 ************************************ 00:09:12.905 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:12.906 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:12.906 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:12.906 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:12.906 ************************************ 00:09:12.906 START TEST lvs_grow_dirty 00:09:12.906 ************************************ 00:09:12.906 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:12.906 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:12.906 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:12.906 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:12.906 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:12.906 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:12.906 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:12.906 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:12.906 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:12.906 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:13.165 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:13.165 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:13.426 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=5f9cefb2-4825-43f9-9644-3204541a6077 00:09:13.426 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f9cefb2-4825-43f9-9644-3204541a6077 00:09:13.426 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:13.426 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:13.426 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:13.426 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5f9cefb2-4825-43f9-9644-3204541a6077 lvol 150 00:09:13.686 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=dc930aa3-a702-4e6f-92a5-e3980f78f6f1 00:09:13.686 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:13.686 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:13.946 [2024-11-27 09:41:29.195824] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:13.946 [2024-11-27 09:41:29.195866] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:13.946 true 00:09:13.946 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f9cefb2-4825-43f9-9644-3204541a6077 00:09:13.946 09:41:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:13.946 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:13.946 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:14.206 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dc930aa3-a702-4e6f-92a5-e3980f78f6f1 00:09:14.467 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:14.467 [2024-11-27 09:41:29.841698] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:14.467 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:14.729 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3700472 00:09:14.729 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:14.729 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:14.729 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3700472 /var/tmp/bdevperf.sock 00:09:14.729 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3700472 ']' 00:09:14.729 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:14.729 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.729 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:14.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:14.729 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.729 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:14.729 [2024-11-27 09:41:30.074890] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
00:09:14.729 [2024-11-27 09:41:30.074942] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3700472 ] 00:09:14.729 [2024-11-27 09:41:30.160221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.729 [2024-11-27 09:41:30.190070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.670 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.670 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:15.670 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:15.931 Nvme0n1 00:09:15.931 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:15.931 [ 00:09:15.931 { 00:09:15.931 "name": "Nvme0n1", 00:09:15.931 "aliases": [ 00:09:15.931 "dc930aa3-a702-4e6f-92a5-e3980f78f6f1" 00:09:15.931 ], 00:09:15.931 "product_name": "NVMe disk", 00:09:15.931 "block_size": 4096, 00:09:15.931 "num_blocks": 38912, 00:09:15.931 "uuid": "dc930aa3-a702-4e6f-92a5-e3980f78f6f1", 00:09:15.931 "numa_id": 0, 00:09:15.931 "assigned_rate_limits": { 00:09:15.931 "rw_ios_per_sec": 0, 00:09:15.931 "rw_mbytes_per_sec": 0, 00:09:15.931 "r_mbytes_per_sec": 0, 00:09:15.931 "w_mbytes_per_sec": 0 00:09:15.931 }, 00:09:15.931 "claimed": false, 00:09:15.931 "zoned": false, 00:09:15.931 "supported_io_types": { 00:09:15.931 "read": true, 00:09:15.931 "write": true, 00:09:15.931 "unmap": true, 00:09:15.931 "flush": true, 00:09:15.931 "reset": true, 00:09:15.931 "nvme_admin": true, 00:09:15.931 "nvme_io": true, 00:09:15.931 "nvme_io_md": false, 00:09:15.931 "write_zeroes": true, 00:09:15.931 "zcopy": false, 00:09:15.931 "get_zone_info": false, 00:09:15.931 "zone_management": false, 00:09:15.931 "zone_append": false, 00:09:15.931 "compare": true, 00:09:15.931 "compare_and_write": true, 00:09:15.931 "abort": true, 00:09:15.931 "seek_hole": false, 00:09:15.931 "seek_data": false, 00:09:15.931 "copy": true, 00:09:15.931 "nvme_iov_md": false 00:09:15.931 }, 00:09:15.931 "memory_domains": [ 00:09:15.931 { 00:09:15.931 "dma_device_id": "system", 00:09:15.931 "dma_device_type": 1 00:09:15.931 } 00:09:15.931 ], 00:09:15.931 "driver_specific": { 00:09:15.931 "nvme": [ 00:09:15.931 { 00:09:15.931 "trid": { 00:09:15.931 "trtype": "TCP", 00:09:15.931 "adrfam": "IPv4", 00:09:15.931 "traddr": "10.0.0.2", 00:09:15.931 "trsvcid": "4420", 00:09:15.931 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:15.931 }, 00:09:15.931 "ctrlr_data": { 00:09:15.931 "cntlid": 1, 00:09:15.931 "vendor_id": "0x8086", 00:09:15.931 "model_number": "SPDK bdev Controller", 00:09:15.931 "serial_number": "SPDK0", 00:09:15.931 "firmware_revision": "25.01", 00:09:15.931 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:15.931 "oacs": { 00:09:15.931 "security": 0, 00:09:15.931 "format": 0, 00:09:15.931 "firmware": 0, 00:09:15.931 "ns_manage": 0 00:09:15.931 }, 00:09:15.931 "multi_ctrlr": true, 00:09:15.931 
"ana_reporting": false 00:09:15.931 }, 00:09:15.931 "vs": { 00:09:15.931 "nvme_version": "1.3" 00:09:15.931 }, 00:09:15.931 "ns_data": { 00:09:15.931 "id": 1, 00:09:15.931 "can_share": true 00:09:15.931 } 00:09:15.931 } 00:09:15.931 ], 00:09:15.931 "mp_policy": "active_passive" 00:09:15.931 } 00:09:15.931 } 00:09:15.931 ] 00:09:15.931 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3700807 00:09:15.931 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:15.931 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:16.193 Running I/O for 10 seconds... 00:09:17.134 Latency(us) 00:09:17.134 [2024-11-27T08:41:32.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:17.134 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.134 Nvme0n1 : 1.00 25231.00 98.56 0.00 0.00 0.00 0.00 0.00 00:09:17.134 [2024-11-27T08:41:32.600Z] =================================================================================================================== 00:09:17.134 [2024-11-27T08:41:32.600Z] Total : 25231.00 98.56 0.00 0.00 0.00 0.00 0.00 00:09:17.134 00:09:18.077 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5f9cefb2-4825-43f9-9644-3204541a6077 00:09:18.077 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.077 Nvme0n1 : 2.00 25317.50 98.90 0.00 0.00 0.00 0.00 0.00 00:09:18.077 [2024-11-27T08:41:33.543Z] =================================================================================================================== 00:09:18.077 [2024-11-27T08:41:33.543Z] Total : 25317.50 98.90 0.00 0.00 0.00 0.00 0.00 00:09:18.077 00:09:18.077 true 00:09:18.077 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f9cefb2-4825-43f9-9644-3204541a6077 00:09:18.077 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:18.338 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:18.338 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:18.338 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3700807 00:09:19.281 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.281 Nvme0n1 : 3.00 25367.33 99.09 0.00 0.00 0.00 0.00 0.00 00:09:19.281 [2024-11-27T08:41:34.747Z] =================================================================================================================== 00:09:19.281 [2024-11-27T08:41:34.747Z] Total : 25367.33 99.09 0.00 0.00 0.00 0.00 0.00 00:09:19.281 00:09:20.224 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.224 Nvme0n1 : 4.00 25407.50 99.25 0.00 0.00 0.00 0.00 0.00 00:09:20.224 [2024-11-27T08:41:35.690Z] 
=================================================================================================================== 00:09:20.224 [2024-11-27T08:41:35.690Z] Total : 25407.50 99.25 0.00 0.00 0.00 0.00 0.00 00:09:20.224 00:09:21.166 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.166 Nvme0n1 : 5.00 25443.60 99.39 0.00 0.00 0.00 0.00 0.00 00:09:21.166 [2024-11-27T08:41:36.632Z] =================================================================================================================== 00:09:21.166 [2024-11-27T08:41:36.632Z] Total : 25443.60 99.39 0.00 0.00 0.00 0.00 0.00 00:09:21.166 00:09:22.112 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.112 Nvme0n1 : 6.00 25469.17 99.49 0.00 0.00 0.00 0.00 0.00 00:09:22.112 [2024-11-27T08:41:37.578Z] =================================================================================================================== 00:09:22.112 [2024-11-27T08:41:37.578Z] Total : 25469.17 99.49 0.00 0.00 0.00 0.00 0.00 00:09:22.112 00:09:23.055 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.055 Nvme0n1 : 7.00 25486.86 99.56 0.00 0.00 0.00 0.00 0.00 00:09:23.055 [2024-11-27T08:41:38.521Z] =================================================================================================================== 00:09:23.055 [2024-11-27T08:41:38.521Z] Total : 25486.86 99.56 0.00 0.00 0.00 0.00 0.00 00:09:23.055 00:09:23.997 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.997 Nvme0n1 : 8.00 25500.75 99.61 0.00 0.00 0.00 0.00 0.00 00:09:23.997 [2024-11-27T08:41:39.463Z] =================================================================================================================== 00:09:23.997 [2024-11-27T08:41:39.463Z] Total : 25500.75 99.61 0.00 0.00 0.00 0.00 0.00 00:09:23.997 00:09:25.384 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.384 Nvme0n1 : 9.00 25511.33 99.65 0.00 0.00 0.00 0.00 0.00 00:09:25.384 [2024-11-27T08:41:40.850Z] =================================================================================================================== 00:09:25.384 [2024-11-27T08:41:40.850Z] Total : 25511.33 99.65 0.00 0.00 0.00 0.00 0.00 00:09:25.384 00:09:26.327 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.327 Nvme0n1 : 10.00 25526.10 99.71 0.00 0.00 0.00 0.00 0.00 00:09:26.327 [2024-11-27T08:41:41.793Z] =================================================================================================================== 00:09:26.327 [2024-11-27T08:41:41.793Z] Total : 25526.10 99.71 0.00 0.00 0.00 0.00 0.00 00:09:26.327 00:09:26.327 00:09:26.327 Latency(us) 00:09:26.327 [2024-11-27T08:41:41.793Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.327 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.327 Nvme0n1 : 10.00 25525.27 99.71 0.00 0.00 5011.40 1536.00 8792.75 00:09:26.327 [2024-11-27T08:41:41.793Z] =================================================================================================================== 00:09:26.327 [2024-11-27T08:41:41.793Z] Total : 25525.27 99.71 0.00 0.00 5011.40 1536.00 8792.75 00:09:26.327 { 00:09:26.327 "results": [ 00:09:26.327 { 00:09:26.327 "job": "Nvme0n1", 00:09:26.327 "core_mask": "0x2", 00:09:26.327 "workload": "randwrite", 00:09:26.327 "status": "finished", 00:09:26.327 "queue_depth": 128, 00:09:26.327 "io_size": 4096, 00:09:26.327 
"runtime": 10.002872, 00:09:26.327 "iops": 25525.269142702215, 00:09:26.327 "mibps": 99.70808258868053, 00:09:26.327 "io_failed": 0, 00:09:26.327 "io_timeout": 0, 00:09:26.327 "avg_latency_us": 5011.4027905762305, 00:09:26.327 "min_latency_us": 1536.0, 00:09:26.327 "max_latency_us": 8792.746666666666 00:09:26.327 } 00:09:26.327 ], 00:09:26.327 "core_count": 1 00:09:26.327 } 00:09:26.327 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3700472 00:09:26.327 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3700472 ']' 00:09:26.327 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3700472 00:09:26.327 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:26.327 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:26.327 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3700472 00:09:26.327 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:26.327 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:26.327 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3700472' 00:09:26.327 killing process with pid 3700472 00:09:26.327 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3700472 00:09:26.327 Received shutdown signal, test time was about 10.000000 seconds 00:09:26.327 00:09:26.327 Latency(us) 00:09:26.327 [2024-11-27T08:41:41.793Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.327 [2024-11-27T08:41:41.793Z] =================================================================================================================== 00:09:26.327 [2024-11-27T08:41:41.793Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:26.327 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3700472 00:09:26.327 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:26.588 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:26.588 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f9cefb2-4825-43f9-9644-3204541a6077 00:09:26.588 09:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:26.848 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:26.848 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:26.848 09:41:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3696665 00:09:26.848 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3696665 00:09:26.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3696665 Killed "${NVMF_APP[@]}" "$@" 00:09:26.848 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:26.848 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:26.848 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:26.848 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:26.848 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:26.848 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3702846 00:09:26.849 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3702846 00:09:26.849 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:26.849 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3702846 ']' 00:09:26.849 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.849 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.849 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.849 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.849 09:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:26.849 [2024-11-27 09:41:42.248079] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:09:26.849 [2024-11-27 09:41:42.248164] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.110 [2024-11-27 09:41:42.338039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.110 [2024-11-27 09:41:42.367050] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.110 [2024-11-27 09:41:42.367079] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:27.110 [2024-11-27 09:41:42.367084] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:27.110 [2024-11-27 09:41:42.367089] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:09:27.110 [2024-11-27 09:41:42.367093] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:27.110 [2024-11-27 09:41:42.367572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.682 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:27.682 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:27.682 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:27.682 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:27.682 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:27.682 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:27.682 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:27.943 [2024-11-27 09:41:43.224773] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:27.943 [2024-11-27 09:41:43.224842] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:27.943 [2024-11-27 09:41:43.224864] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:27.943 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:27.944 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev dc930aa3-a702-4e6f-92a5-e3980f78f6f1 00:09:27.944 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=dc930aa3-a702-4e6f-92a5-e3980f78f6f1 00:09:27.944 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:27.944 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:27.944 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:27.944 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:27.944 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:28.205 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b dc930aa3-a702-4e6f-92a5-e3980f78f6f1 -t 2000 00:09:28.205 [ 00:09:28.205 { 00:09:28.205 "name": "dc930aa3-a702-4e6f-92a5-e3980f78f6f1", 00:09:28.205 "aliases": [ 00:09:28.205 "lvs/lvol" 00:09:28.205 ], 00:09:28.205 "product_name": "Logical Volume", 00:09:28.205 "block_size": 4096, 00:09:28.205 "num_blocks": 38912, 00:09:28.205 "uuid": "dc930aa3-a702-4e6f-92a5-e3980f78f6f1", 00:09:28.205 "assigned_rate_limits": { 00:09:28.205 "rw_ios_per_sec": 0, 00:09:28.205 "rw_mbytes_per_sec": 0, 
00:09:28.205 "r_mbytes_per_sec": 0, 00:09:28.205 "w_mbytes_per_sec": 0 00:09:28.205 }, 00:09:28.205 "claimed": false, 00:09:28.205 "zoned": false, 00:09:28.205 "supported_io_types": { 00:09:28.205 "read": true, 00:09:28.205 "write": true, 00:09:28.205 "unmap": true, 00:09:28.205 "flush": false, 00:09:28.205 "reset": true, 00:09:28.205 "nvme_admin": false, 00:09:28.205 "nvme_io": false, 00:09:28.205 "nvme_io_md": false, 00:09:28.205 "write_zeroes": true, 00:09:28.205 "zcopy": false, 00:09:28.205 "get_zone_info": false, 00:09:28.205 "zone_management": false, 00:09:28.205 "zone_append": false, 00:09:28.205 "compare": false, 00:09:28.205 "compare_and_write": false, 00:09:28.205 "abort": false, 00:09:28.205 "seek_hole": true, 00:09:28.205 "seek_data": true, 00:09:28.205 "copy": false, 00:09:28.205 "nvme_iov_md": false 00:09:28.205 }, 00:09:28.205 "driver_specific": { 00:09:28.205 "lvol": { 00:09:28.205 "lvol_store_uuid": "5f9cefb2-4825-43f9-9644-3204541a6077", 00:09:28.205 "base_bdev": "aio_bdev", 00:09:28.205 "thin_provision": false, 00:09:28.205 "num_allocated_clusters": 38, 00:09:28.205 "snapshot": false, 00:09:28.205 "clone": false, 00:09:28.205 "esnap_clone": false 00:09:28.205 } 00:09:28.205 } 00:09:28.205 } 00:09:28.205 ] 00:09:28.205 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:28.205 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f9cefb2-4825-43f9-9644-3204541a6077 00:09:28.205 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:28.466 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:28.466 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f9cefb2-4825-43f9-9644-3204541a6077 00:09:28.466 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:28.466 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:28.466 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:28.726 [2024-11-27 09:41:44.049336] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:28.726 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f9cefb2-4825-43f9-9644-3204541a6077 00:09:28.726 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:28.726 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f9cefb2-4825-43f9-9644-3204541a6077 00:09:28.726 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:28.726 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:28.726 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:28.726 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:28.726 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:28.726 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:28.726 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:28.726 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:28.726 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f9cefb2-4825-43f9-9644-3204541a6077 00:09:28.988 request: 00:09:28.988 { 00:09:28.988 "uuid": "5f9cefb2-4825-43f9-9644-3204541a6077", 00:09:28.988 "method": "bdev_lvol_get_lvstores", 00:09:28.988 "req_id": 1 00:09:28.988 } 00:09:28.988 Got JSON-RPC error response 00:09:28.988 response: 00:09:28.988 { 00:09:28.988 "code": -19, 00:09:28.988 "message": "No such device" 00:09:28.988 } 00:09:28.988 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:28.988 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:28.988 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:28.988 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:28.988 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:28.988 aio_bdev 00:09:28.988 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev dc930aa3-a702-4e6f-92a5-e3980f78f6f1 00:09:28.988 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=dc930aa3-a702-4e6f-92a5-e3980f78f6f1 00:09:28.988 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:28.988 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:28.988 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:28.988 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:28.988 09:41:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:29.250 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b dc930aa3-a702-4e6f-92a5-e3980f78f6f1 -t 2000 00:09:29.511 [ 00:09:29.511 { 00:09:29.511 "name": "dc930aa3-a702-4e6f-92a5-e3980f78f6f1", 00:09:29.511 "aliases": [ 00:09:29.511 "lvs/lvol" 00:09:29.511 ], 00:09:29.511 "product_name": "Logical Volume", 00:09:29.511 "block_size": 4096, 00:09:29.511 "num_blocks": 38912, 00:09:29.511 "uuid": "dc930aa3-a702-4e6f-92a5-e3980f78f6f1", 00:09:29.511 "assigned_rate_limits": { 00:09:29.511 "rw_ios_per_sec": 0, 00:09:29.511 "rw_mbytes_per_sec": 0, 00:09:29.511 "r_mbytes_per_sec": 0, 00:09:29.511 "w_mbytes_per_sec": 0 00:09:29.511 }, 00:09:29.511 "claimed": false, 00:09:29.511 "zoned": false, 00:09:29.511 "supported_io_types": { 00:09:29.511 "read": true, 00:09:29.511 "write": true, 00:09:29.511 "unmap": true, 00:09:29.511 "flush": false, 00:09:29.511 "reset": true, 00:09:29.511 "nvme_admin": false, 00:09:29.511 "nvme_io": false, 00:09:29.511 "nvme_io_md": false, 00:09:29.511 "write_zeroes": true, 00:09:29.511 "zcopy": false, 00:09:29.511 "get_zone_info": false, 00:09:29.511 "zone_management": false, 00:09:29.511 "zone_append": false, 00:09:29.511 "compare": false, 00:09:29.511 "compare_and_write": false, 00:09:29.511 "abort": false, 00:09:29.511 "seek_hole": true, 00:09:29.511 "seek_data": true, 00:09:29.511 "copy": false, 00:09:29.511 "nvme_iov_md": false 00:09:29.511 }, 00:09:29.511 "driver_specific": { 00:09:29.511 "lvol": { 00:09:29.511 "lvol_store_uuid": "5f9cefb2-4825-43f9-9644-3204541a6077", 00:09:29.511 "base_bdev": "aio_bdev", 00:09:29.511 "thin_provision": false, 00:09:29.511 "num_allocated_clusters": 38, 00:09:29.511 "snapshot": false, 00:09:29.511 "clone": false, 00:09:29.511 "esnap_clone": false 00:09:29.511 } 00:09:29.511 } 00:09:29.511 } 00:09:29.511 ] 00:09:29.511 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:29.511 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f9cefb2-4825-43f9-9644-3204541a6077 00:09:29.511 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:29.511 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:29.511 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f9cefb2-4825-43f9-9644-3204541a6077 00:09:29.511 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:29.772 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:29.772 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dc930aa3-a702-4e6f-92a5-e3980f78f6f1 00:09:30.032 09:41:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5f9cefb2-4825-43f9-9644-3204541a6077 00:09:30.032 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:30.295 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:30.295 00:09:30.295 real 0m17.303s 00:09:30.295 user 0m45.456s 00:09:30.295 sys 0m3.216s 00:09:30.295 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.295 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:30.295 ************************************ 00:09:30.295 END TEST lvs_grow_dirty 00:09:30.295 ************************************ 00:09:30.295 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:30.295 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:30.295 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:30.295 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:30.295 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:30.295 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:30.295 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:30.295 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:30.295 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:30.295 nvmf_trace.0 00:09:30.295 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:30.295 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:30.295 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:30.295 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:30.295 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:30.295 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:30.295 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:30.295 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:30.295 rmmod nvme_tcp 00:09:30.295 rmmod nvme_fabrics 00:09:30.295 rmmod nvme_keyring 00:09:30.558 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:30.558 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:30.558 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:30.558 
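Once the checks pass, the test tears everything down in reverse creation order and archives the SPDK trace file from /dev/shm, as process_shm does above; the chunk that follows finishes the job by killing the nvmf target (pid 3702846 here) via the killprocess helper. A condensed sketch of both, under the same placeholder names ($output_dir is also a placeholder):

  # Teardown, in reverse creation order.
  $rpc bdev_lvol_delete "$lvol_uuid"
  $rpc bdev_lvol_delete_lvstore -u "$lvs_uuid"
  $rpc bdev_aio_delete aio_bdev
  rm -f "$rootdir/test/nvmf/target/aio_bdev"
  # Archive the shared-memory trace for offline analysis.
  tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0
  # Simplified form of the killprocess idiom traced just below.
  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1    # still alive?
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true           # reap it so the test can assert a clean exit
  }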
09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3702846 ']' 00:09:30.558 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3702846 00:09:30.558 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3702846 ']' 00:09:30.558 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3702846 00:09:30.558 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:30.558 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:30.558 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3702846 00:09:30.558 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:30.558 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:30.558 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3702846' 00:09:30.558 killing process with pid 3702846 00:09:30.558 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3702846 00:09:30.558 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3702846 00:09:30.558 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:30.558 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:30.558 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:30.558 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:30.558 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:30.558 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:30.558 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:30.558 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:30.558 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:30.558 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.558 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.558 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.106 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:33.106 00:09:33.106 real 0m44.621s 00:09:33.106 user 1m7.332s 00:09:33.106 sys 0m10.865s 00:09:33.106 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.106 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:33.106 ************************************ 00:09:33.106 END TEST nvmf_lvs_grow 00:09:33.106 ************************************ 00:09:33.106 09:41:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:33.106 09:41:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:33.106 09:41:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:33.107 ************************************ 00:09:33.107 START TEST nvmf_bdev_io_wait 00:09:33.107 ************************************ 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:33.107 * Looking for test storage... 00:09:33.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:33.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.107 --rc genhtml_branch_coverage=1 00:09:33.107 --rc genhtml_function_coverage=1 00:09:33.107 --rc genhtml_legend=1 00:09:33.107 --rc geninfo_all_blocks=1 00:09:33.107 --rc geninfo_unexecuted_blocks=1 00:09:33.107 00:09:33.107 ' 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:33.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.107 --rc genhtml_branch_coverage=1 00:09:33.107 --rc genhtml_function_coverage=1 00:09:33.107 --rc genhtml_legend=1 00:09:33.107 --rc geninfo_all_blocks=1 00:09:33.107 --rc geninfo_unexecuted_blocks=1 00:09:33.107 00:09:33.107 ' 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:33.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.107 --rc genhtml_branch_coverage=1 00:09:33.107 --rc genhtml_function_coverage=1 00:09:33.107 --rc genhtml_legend=1 00:09:33.107 --rc geninfo_all_blocks=1 00:09:33.107 --rc geninfo_unexecuted_blocks=1 00:09:33.107 00:09:33.107 ' 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:33.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.107 --rc genhtml_branch_coverage=1 00:09:33.107 --rc genhtml_function_coverage=1 00:09:33.107 --rc genhtml_legend=1 00:09:33.107 --rc geninfo_all_blocks=1 00:09:33.107 --rc geninfo_unexecuted_blocks=1 00:09:33.107 00:09:33.107 ' 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:33.107 09:41:48 
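The long run of scripts/common.sh lines above is just a dotted-version comparison: lt 1.15 2 splits both version strings on '.' and '-' and compares them field by field to decide whether the installed lcov needs the legacy branch/function-coverage flags. A minimal sketch of the same idiom, assuming numeric components as the trace's decimal checks do:

  lt() {   # is dotted version $1 strictly less than $2?
      local -a ver1 ver2
      IFS=.- read -ra ver1 <<< "$1"
      IFS=.- read -ra ver2 <<< "$2"
      local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }
  lt "$(lcov --version | awk '{print $NF}')" 2 \
      && LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'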
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.107 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:33.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:33.108 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:33.108 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:33.108 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:33.108 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:33.108 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
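One genuine defect is preserved in the trace above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' and test(1) aborts with "integer expression expected", because an unset variable expanded to the empty string, which is not a valid integer operand. The run continues regardless, but the defensive form is to default the variable before the numeric comparison; a sketch (flag is a placeholder for whatever variable line 33 actually tests):

  # Fragile: with flag unset this expands to '[' '' -eq 1 ']' and errors out.
  if [ "$flag" -eq 1 ]; then echo "flag set"; fi
  # Robust: default to 0 so the operand is always numeric.
  if [ "${flag:-0}" -eq 1 ]; then echo "flag set"; fi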
MALLOC_BLOCK_SIZE=512 00:09:33.108 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:33.108 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:33.108 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.108 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:33.108 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:33.108 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:33.108 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.108 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.108 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.108 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:33.108 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:33.108 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:33.108 09:41:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:41.251 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:41.251 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:41.251 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:41.251 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:41.251 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:41.251 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:41.251 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:41.251 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:41.251 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:41.251 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:41.251 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:41.251 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:41.251 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:41.251 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:41.252 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:41.252 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.252 09:41:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:41.252 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:41.252 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
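The wall of nvmf/common.sh lines above builds whitelists of Intel E810 (0x1592/0x159b), X722 (0x37d2) and Mellanox PCI device IDs, matches the machine's NICs against them, and collects the kernel net devices from sysfs; here both functions of one E810 (0x8086:0x159b) map to cvl_0_0 and cvl_0_1. A rough standalone equivalent using plain sysfs (the script itself works from a pre-built pci_bus_cache rather than this loop):

  # Find net devices backed by Intel E810 functions (vendor 0x8086, device 0x159b).
  for pci in /sys/bus/pci/devices/*; do
      [[ $(< "$pci/vendor") == 0x8086 && $(< "$pci/device") == 0x159b ]] || continue
      for net in "$pci"/net/*; do
          [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
      done
  done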
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:41.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:41.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:09:41.252 00:09:41.252 --- 10.0.0.2 ping statistics --- 00:09:41.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.252 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:41.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:41.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:09:41.252 00:09:41.252 --- 10.0.0.1 ping statistics --- 00:09:41.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.252 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:41.252 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:41.253 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:41.253 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:41.253 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:41.253 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:41.253 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:41.253 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3707913 00:09:41.253 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3707913 00:09:41.253 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:41.253 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3707913 ']' 00:09:41.253 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.253 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:41.253 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.253 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:41.253 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:41.253 [2024-11-27 09:41:55.921826] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
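nvmf_tcp_init, traced above, turns the two E810 ports into a point-to-point rig: the target port moves into a network namespace with 10.0.0.2, the initiator port stays in the root namespace with 10.0.0.1, port 4420 is opened in iptables, and both directions are ping-verified before the target starts inside the namespace. Condensed from the commands in the trace (the nvmf_tgt line abbreviates the nvmfappstart wrapper):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port enters the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator
  # The target then runs entirely inside the namespace:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &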
00:09:41.253 [2024-11-27 09:41:55.921892] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.253 [2024-11-27 09:41:56.022969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:41.253 [2024-11-27 09:41:56.077397] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:41.253 [2024-11-27 09:41:56.077453] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:41.253 [2024-11-27 09:41:56.077462] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:41.253 [2024-11-27 09:41:56.077470] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:41.253 [2024-11-27 09:41:56.077477] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:41.253 [2024-11-27 09:41:56.079897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.253 [2024-11-27 09:41:56.080057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:41.253 [2024-11-27 09:41:56.080217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:41.253 [2024-11-27 09:41:56.080273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:09:41.514 [2024-11-27 09:41:56.867781] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:41.514 Malloc0 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:41.514 [2024-11-27 09:41:56.916393] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3708127 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3708130 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:41.514 { 00:09:41.514 "params": { 
00:09:41.514 "name": "Nvme$subsystem", 00:09:41.514 "trtype": "$TEST_TRANSPORT", 00:09:41.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:41.514 "adrfam": "ipv4", 00:09:41.514 "trsvcid": "$NVMF_PORT", 00:09:41.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:41.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:41.514 "hdgst": ${hdgst:-false}, 00:09:41.514 "ddgst": ${ddgst:-false} 00:09:41.514 }, 00:09:41.514 "method": "bdev_nvme_attach_controller" 00:09:41.514 } 00:09:41.514 EOF 00:09:41.514 )") 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3708133 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:41.514 { 00:09:41.514 "params": { 00:09:41.514 "name": "Nvme$subsystem", 00:09:41.514 "trtype": "$TEST_TRANSPORT", 00:09:41.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:41.514 "adrfam": "ipv4", 00:09:41.514 "trsvcid": "$NVMF_PORT", 00:09:41.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:41.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:41.514 "hdgst": ${hdgst:-false}, 00:09:41.514 "ddgst": ${ddgst:-false} 00:09:41.514 }, 00:09:41.514 "method": "bdev_nvme_attach_controller" 00:09:41.514 } 00:09:41.514 EOF 00:09:41.514 )") 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3708137 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:41.514 { 00:09:41.514 "params": { 00:09:41.514 "name": "Nvme$subsystem", 00:09:41.514 "trtype": "$TEST_TRANSPORT", 00:09:41.514 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:09:41.514 "adrfam": "ipv4", 00:09:41.514 "trsvcid": "$NVMF_PORT", 00:09:41.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:41.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:41.514 "hdgst": ${hdgst:-false}, 00:09:41.514 "ddgst": ${ddgst:-false} 00:09:41.514 }, 00:09:41.514 "method": "bdev_nvme_attach_controller" 00:09:41.514 } 00:09:41.514 EOF 00:09:41.514 )") 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:41.514 { 00:09:41.514 "params": { 00:09:41.514 "name": "Nvme$subsystem", 00:09:41.514 "trtype": "$TEST_TRANSPORT", 00:09:41.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:41.514 "adrfam": "ipv4", 00:09:41.514 "trsvcid": "$NVMF_PORT", 00:09:41.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:41.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:41.514 "hdgst": ${hdgst:-false}, 00:09:41.514 "ddgst": ${ddgst:-false} 00:09:41.514 }, 00:09:41.514 "method": "bdev_nvme_attach_controller" 00:09:41.514 } 00:09:41.514 EOF 00:09:41.514 )") 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:41.514 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3708127 00:09:41.515 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:41.515 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:41.515 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:41.515 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:41.515 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:41.515 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:41.515 "params": { 00:09:41.515 "name": "Nvme1", 00:09:41.515 "trtype": "tcp", 00:09:41.515 "traddr": "10.0.0.2", 00:09:41.515 "adrfam": "ipv4", 00:09:41.515 "trsvcid": "4420", 00:09:41.515 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:41.515 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:41.515 "hdgst": false, 00:09:41.515 "ddgst": false 00:09:41.515 }, 00:09:41.515 "method": "bdev_nvme_attach_controller" 00:09:41.515 }' 00:09:41.515 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:41.515 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:41.515 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:41.515 "params": { 00:09:41.515 "name": "Nvme1", 00:09:41.515 "trtype": "tcp", 00:09:41.515 "traddr": "10.0.0.2", 00:09:41.515 "adrfam": "ipv4", 00:09:41.515 "trsvcid": "4420", 00:09:41.515 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:41.515 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:41.515 "hdgst": false, 00:09:41.515 "ddgst": false 00:09:41.515 }, 00:09:41.515 "method": "bdev_nvme_attach_controller" 00:09:41.515 }' 00:09:41.515 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:41.515 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:41.515 "params": { 00:09:41.515 "name": "Nvme1", 00:09:41.515 "trtype": "tcp", 00:09:41.515 "traddr": "10.0.0.2", 00:09:41.515 "adrfam": "ipv4", 00:09:41.515 "trsvcid": "4420", 00:09:41.515 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:41.515 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:41.515 "hdgst": false, 00:09:41.515 "ddgst": false 00:09:41.515 }, 00:09:41.515 "method": "bdev_nvme_attach_controller" 00:09:41.515 }' 00:09:41.515 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:41.515 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:41.515 "params": { 00:09:41.515 "name": "Nvme1", 00:09:41.515 "trtype": "tcp", 00:09:41.515 "traddr": "10.0.0.2", 00:09:41.515 "adrfam": "ipv4", 00:09:41.515 "trsvcid": "4420", 00:09:41.515 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:41.515 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:41.515 "hdgst": false, 00:09:41.515 "ddgst": false 00:09:41.515 }, 00:09:41.515 "method": "bdev_nvme_attach_controller" 00:09:41.515 }' 00:09:41.515 [2024-11-27 09:41:56.974279] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:09:41.515 [2024-11-27 09:41:56.974350] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:41.515 [2024-11-27 09:41:56.977962] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:09:41.515 [2024-11-27 09:41:56.978030] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:41.775 [2024-11-27 09:41:56.981472] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:09:41.775 [2024-11-27 09:41:56.981560] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:41.775 [2024-11-27 09:41:56.987606] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
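Once the target answers RPC, the test provisions one malloc-backed subsystem and launches four independent bdevperf initiators — write, read, flush and unmap — each pinned to its own core and fed the generated Nvme1 attach config printed above on /dev/fd/63. A condensed sketch (rpc_cmd in the trace wraps the same rpc.py, and process substitution stands in for the fd-63 plumbing):

  # Target-side provisioning, as traced via rpc_cmd above.
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Four initiators, one workload each, on distinct cores and shm instance IDs.
  bdevperf=./build/examples/bdevperf
  $bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
  $bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
  $bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
  $bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
  wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"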
00:09:41.775 [2024-11-27 09:41:56.987695] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:41.775 [2024-11-27 09:41:57.156426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.775 [2024-11-27 09:41:57.195226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:42.035 [2024-11-27 09:41:57.266429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.035 [2024-11-27 09:41:57.307228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:42.035 [2024-11-27 09:41:57.314986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.035 [2024-11-27 09:41:57.350723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:42.035 [2024-11-27 09:41:57.388385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.035 [2024-11-27 09:41:57.426613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:42.295 Running I/O for 1 seconds... 00:09:42.295 Running I/O for 1 seconds... 00:09:42.295 Running I/O for 1 seconds... 00:09:42.295 Running I/O for 1 seconds... 00:09:43.237 6492.00 IOPS, 25.36 MiB/s 00:09:43.237 Latency(us) 00:09:43.237 [2024-11-27T08:41:58.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.237 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:43.237 Nvme1n1 : 1.02 6502.35 25.40 0.00 0.00 19503.02 7864.32 27962.03 00:09:43.237 [2024-11-27T08:41:58.703Z] =================================================================================================================== 00:09:43.237 [2024-11-27T08:41:58.703Z] Total : 6502.35 25.40 0.00 0.00 19503.02 7864.32 27962.03 00:09:43.237 182416.00 IOPS, 712.56 MiB/s [2024-11-27T08:41:58.703Z] 6350.00 IOPS, 24.80 MiB/s 00:09:43.237 Latency(us) 00:09:43.237 [2024-11-27T08:41:58.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.237 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:43.237 Nvme1n1 : 1.00 182052.43 711.14 0.00 0.00 699.00 300.37 1966.08 00:09:43.237 [2024-11-27T08:41:58.703Z] =================================================================================================================== 00:09:43.237 [2024-11-27T08:41:58.703Z] Total : 182052.43 711.14 0.00 0.00 699.00 300.37 1966.08 00:09:43.237 00:09:43.237 Latency(us) 00:09:43.237 [2024-11-27T08:41:58.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.237 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:43.237 Nvme1n1 : 1.01 6460.14 25.23 0.00 0.00 19749.97 5242.88 34078.72 00:09:43.237 [2024-11-27T08:41:58.703Z] =================================================================================================================== 00:09:43.237 [2024-11-27T08:41:58.703Z] Total : 6460.14 25.23 0.00 0.00 19749.97 5242.88 34078.72 00:09:43.498 11981.00 IOPS, 46.80 MiB/s 00:09:43.498 Latency(us) 00:09:43.498 [2024-11-27T08:41:58.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.498 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:43.498 Nvme1n1 : 1.01 12054.50 47.09 0.00 0.00 10583.66 4560.21 21626.88 00:09:43.498 [2024-11-27T08:41:58.964Z] 
=================================================================================================================== 00:09:43.498 [2024-11-27T08:41:58.964Z] Total : 12054.50 47.09 0.00 0.00 10583.66 4560.21 21626.88 00:09:43.498 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3708130 00:09:43.498 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3708133 00:09:43.498 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3708137 00:09:43.498 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:43.498 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.498 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:43.498 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.498 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:43.498 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:43.498 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:43.498 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:43.498 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:43.498 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:43.498 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:43.498 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:43.498 rmmod nvme_tcp 00:09:43.498 rmmod nvme_fabrics 00:09:43.498 rmmod nvme_keyring 00:09:43.498 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:43.498 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:43.498 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:43.498 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3707913 ']' 00:09:43.498 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3707913 00:09:43.498 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3707913 ']' 00:09:43.498 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3707913 00:09:43.498 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:43.498 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:43.498 09:41:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3707913 00:09:43.759 09:41:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:43.759 09:41:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:43.759 09:41:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 3707913' 00:09:43.759 killing process with pid 3707913 00:09:43.759 09:41:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3707913 00:09:43.759 09:41:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3707913 00:09:43.759 09:41:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:43.759 09:41:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:43.759 09:41:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:43.759 09:41:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:43.759 09:41:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:43.759 09:41:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:43.759 09:41:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:43.759 09:41:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:43.759 09:41:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:43.759 09:41:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.759 09:41:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.759 09:41:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:46.304 00:09:46.304 real 0m13.121s 00:09:46.304 user 0m20.100s 00:09:46.304 sys 0m7.327s 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:46.304 ************************************ 00:09:46.304 END TEST nvmf_bdev_io_wait 00:09:46.304 ************************************ 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:46.304 ************************************ 00:09:46.304 START TEST nvmf_queue_depth 00:09:46.304 ************************************ 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:46.304 * Looking for test storage... 
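The run_test line above invokes the next test as a standalone script, so the same run can be reproduced outside Jenkins from an SPDK checkout; the sketch below assumes root privileges, a built tree, and the NIC/transport settings this job's autorun-spdk.conf provides:

    # Sketch: invoke the queue-depth test directly (checkout path hypothetical).
    cd /path/to/spdk
    sudo NET_TYPE=phy SPDK_TEST_NVMF_NICS=e810 \
        ./test/nvmf/target/queue_depth.sh --transport=tcp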
00:09:46.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:46.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.304 --rc genhtml_branch_coverage=1 00:09:46.304 --rc genhtml_function_coverage=1 00:09:46.304 --rc genhtml_legend=1 00:09:46.304 --rc geninfo_all_blocks=1 00:09:46.304 --rc geninfo_unexecuted_blocks=1 00:09:46.304 00:09:46.304 ' 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:46.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.304 --rc genhtml_branch_coverage=1 00:09:46.304 --rc genhtml_function_coverage=1 00:09:46.304 --rc genhtml_legend=1 00:09:46.304 --rc geninfo_all_blocks=1 00:09:46.304 --rc geninfo_unexecuted_blocks=1 00:09:46.304 00:09:46.304 ' 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:46.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.304 --rc genhtml_branch_coverage=1 00:09:46.304 --rc genhtml_function_coverage=1 00:09:46.304 --rc genhtml_legend=1 00:09:46.304 --rc geninfo_all_blocks=1 00:09:46.304 --rc geninfo_unexecuted_blocks=1 00:09:46.304 00:09:46.304 ' 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:46.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.304 --rc genhtml_branch_coverage=1 00:09:46.304 --rc genhtml_function_coverage=1 00:09:46.304 --rc genhtml_legend=1 00:09:46.304 --rc geninfo_all_blocks=1 00:09:46.304 --rc geninfo_unexecuted_blocks=1 00:09:46.304 00:09:46.304 ' 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:46.304 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:46.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:46.305 09:42:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:54.571 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:54.571 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:54.571 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:54.571 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:54.571 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:54.571 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:54.571 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:54.571 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:54.571 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:54.571 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:54.571 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:54.571 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:54.571 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:54.571 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:54.571 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:54.571 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:54.571 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:54.571 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:54.571 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:54.571 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:54.571 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:54.571 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:54.571 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:54.571 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:54.571 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:54.571 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:54.572 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:54.572 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:54.572 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:54.572 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:54.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:54.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:09:54.572 00:09:54.572 --- 10.0.0.2 ping statistics --- 00:09:54.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.572 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:09:54.572 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:54.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:54.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:09:54.572 00:09:54.572 --- 10.0.0.1 ping statistics --- 00:09:54.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.572 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:09:54.572 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:54.572 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:54.572 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:54.572 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:54.572 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:54.572 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:54.572 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:54.572 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:54.572 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:54.572 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:54.572 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:54.572 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:54.572 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:54.572 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3713031 00:09:54.572 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3713031 00:09:54.572 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:54.572 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3713031 ']' 00:09:54.572 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.572 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:54.572 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.572 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:54.572 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:54.572 [2024-11-27 09:42:09.126497] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
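The pings above confirm the namespace plumbing: the initiator address 10.0.0.1 lives on cvl_0_1 in the default namespace, while 10.0.0.2 lives on cvl_0_0 inside cvl_0_0_ns_spdk, where nvmf_tgt is then launched. Condensed from the trace (a sketch; link-up commands, iptables rules, and pid bookkeeping elided):

    # Sketch of the namespace setup and target launch traced above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &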
00:09:54.572 [2024-11-27 09:42:09.126567] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.573 [2024-11-27 09:42:09.228984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.573 [2024-11-27 09:42:09.281507] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:54.573 [2024-11-27 09:42:09.281559] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:54.573 [2024-11-27 09:42:09.281571] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:54.573 [2024-11-27 09:42:09.281581] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:54.573 [2024-11-27 09:42:09.281589] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:54.573 [2024-11-27 09:42:09.282455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.573 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.573 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:54.573 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:54.573 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:54.573 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:54.573 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:54.573 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:54.573 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.573 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:54.573 [2024-11-27 09:42:10.011958] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:54.573 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.573 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:54.573 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.573 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:54.835 Malloc0 00:09:54.835 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.835 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:54.835 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.835 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:54.835 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.835 09:42:10 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:54.835 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.835 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:54.835 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.835 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:54.835 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.835 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:54.835 [2024-11-27 09:42:10.073955] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:54.835 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.835 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3713452 00:09:54.835 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:54.835 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:54.835 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3713452 /var/tmp/bdevperf.sock 00:09:54.835 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3713452 ']' 00:09:54.835 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:54.835 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:54.835 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:54.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:54.835 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:54.835 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:54.835 [2024-11-27 09:42:10.132419] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
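At this point the target side is fully configured; the rpc_cmd calls traced above condense to the following plain rpc.py sequence (a sketch against the default /var/tmp/spdk.sock socket):

    # Sketch: TCP transport, 64 MiB malloc bdev with 512-byte blocks,
    # subsystem, namespace, and a listener on 10.0.0.2:4420.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420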
00:09:54.835 [2024-11-27 09:42:10.132484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3713452 ] 00:09:54.835 [2024-11-27 09:42:10.223573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.835 [2024-11-27 09:42:10.276592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.779 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:55.780 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:55.780 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:55.780 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.780 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:55.780 NVMe0n1 00:09:55.780 09:42:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.780 09:42:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:56.041 Running I/O for 10 seconds... 00:09:57.924 10347.00 IOPS, 40.42 MiB/s [2024-11-27T08:42:14.335Z] 11144.00 IOPS, 43.53 MiB/s [2024-11-27T08:42:15.721Z] 11268.33 IOPS, 44.02 MiB/s [2024-11-27T08:42:16.663Z] 11405.25 IOPS, 44.55 MiB/s [2024-11-27T08:42:17.604Z] 11867.80 IOPS, 46.36 MiB/s [2024-11-27T08:42:18.545Z] 12118.50 IOPS, 47.34 MiB/s [2024-11-27T08:42:19.489Z] 12294.29 IOPS, 48.02 MiB/s [2024-11-27T08:42:20.432Z] 12456.62 IOPS, 48.66 MiB/s [2024-11-27T08:42:21.392Z] 12626.44 IOPS, 49.32 MiB/s [2024-11-27T08:42:21.392Z] 12757.90 IOPS, 49.84 MiB/s 00:10:05.926 Latency(us) 00:10:05.926 [2024-11-27T08:42:21.392Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.926 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:05.926 Verification LBA range: start 0x0 length 0x4000 00:10:05.926 NVMe0n1 : 10.05 12776.02 49.91 0.00 0.00 79838.57 16384.00 68157.44 00:10:05.926 [2024-11-27T08:42:21.392Z] =================================================================================================================== 00:10:05.926 [2024-11-27T08:42:21.392Z] Total : 12776.02 49.91 0.00 0.00 79838.57 16384.00 68157.44 00:10:05.926 { 00:10:05.926 "results": [ 00:10:05.926 { 00:10:05.926 "job": "NVMe0n1", 00:10:05.926 "core_mask": "0x1", 00:10:05.926 "workload": "verify", 00:10:05.926 "status": "finished", 00:10:05.926 "verify_range": { 00:10:05.926 "start": 0, 00:10:05.926 "length": 16384 00:10:05.926 }, 00:10:05.926 "queue_depth": 1024, 00:10:05.926 "io_size": 4096, 00:10:05.926 "runtime": 10.053442, 00:10:05.926 "iops": 12776.022381190442, 00:10:05.926 "mibps": 49.90633742652516, 00:10:05.926 "io_failed": 0, 00:10:05.926 "io_timeout": 0, 00:10:05.926 "avg_latency_us": 79838.57392555453, 00:10:05.926 "min_latency_us": 16384.0, 00:10:05.926 "max_latency_us": 68157.44 00:10:05.926 } 00:10:05.926 ], 00:10:05.926 "core_count": 1 00:10:05.926 } 00:10:05.926 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 3713452 00:10:05.926 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3713452 ']' 00:10:05.926 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3713452 00:10:05.926 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:05.926 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:06.187 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3713452 00:10:06.187 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:06.187 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:06.187 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3713452' 00:10:06.187 killing process with pid 3713452 00:10:06.187 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3713452 00:10:06.187 Received shutdown signal, test time was about 10.000000 seconds 00:10:06.187 00:10:06.187 Latency(us) 00:10:06.187 [2024-11-27T08:42:21.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:06.187 [2024-11-27T08:42:21.653Z] =================================================================================================================== 00:10:06.187 [2024-11-27T08:42:21.653Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:06.187 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3713452 00:10:06.187 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:06.187 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:06.187 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:06.187 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:06.187 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:06.187 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:06.187 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:06.187 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:06.187 rmmod nvme_tcp 00:10:06.187 rmmod nvme_fabrics 00:10:06.187 rmmod nvme_keyring 00:10:06.187 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:06.187 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:06.187 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:06.187 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3713031 ']' 00:10:06.187 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3713031 00:10:06.187 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3713031 ']' 00:10:06.187 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3713031 
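Teardown then proceeds in reverse order, as the trace around this point shows: the bdevperf client (pid 3713452) is stopped, the kernel initiator modules are unloaded, and finally the nvmf_tgt process (pid 3713031) and the test addressing are removed. In outline (a sketch of the harness steps, using this run's pids; the body of _remove_spdk_ns is an assumption):

    # Sketch of the teardown traced above and below.
    kill 3713452 && wait 3713452          # stop the bdevperf client
    modprobe -v -r nvme-tcp nvme-fabrics  # unload initiator modules
    kill 3713031 && wait 3713031          # stop the nvmf target
    ip netns delete cvl_0_0_ns_spdk       # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1              # drop the initiator test address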
00:10:06.187 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:06.187 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:06.187 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3713031 00:10:06.448 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:06.448 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:06.448 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3713031' 00:10:06.448 killing process with pid 3713031 00:10:06.448 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3713031 00:10:06.448 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3713031 00:10:06.448 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:06.448 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:06.448 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:06.448 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:06.448 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:10:06.448 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:06.448 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:10:06.448 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:06.448 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:06.448 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.448 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.448 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.997 09:42:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:08.997 00:10:08.997 real 0m22.562s 00:10:08.997 user 0m25.854s 00:10:08.997 sys 0m7.132s 00:10:08.997 09:42:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.997 09:42:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:08.997 ************************************ 00:10:08.997 END TEST nvmf_queue_depth 00:10:08.997 ************************************ 00:10:08.997 09:42:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:08.997 09:42:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:08.997 09:42:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.997 09:42:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:08.997 
************************************ 00:10:08.997 START TEST nvmf_target_multipath 00:10:08.997 ************************************ 00:10:08.997 09:42:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:08.997 * Looking for test storage... 00:10:08.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.997 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:08.997 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:10:08.997 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:08.997 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:08.997 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:08.997 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:08.997 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:08.997 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:08.997 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:08.997 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:08.997 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:08.997 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:08.997 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:08.997 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:08.997 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:08.997 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:08.997 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:08.997 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:08.997 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:08.997 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:08.997 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:08.997 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.997 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:08.997 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:08.997 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:08.997 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:08.997 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.997 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:08.997 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:08.997 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:08.997 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:08.997 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:08.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.998 --rc genhtml_branch_coverage=1 00:10:08.998 --rc genhtml_function_coverage=1 00:10:08.998 --rc genhtml_legend=1 00:10:08.998 --rc geninfo_all_blocks=1 00:10:08.998 --rc geninfo_unexecuted_blocks=1 00:10:08.998 00:10:08.998 ' 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:08.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.998 --rc genhtml_branch_coverage=1 00:10:08.998 --rc genhtml_function_coverage=1 00:10:08.998 --rc genhtml_legend=1 00:10:08.998 --rc geninfo_all_blocks=1 00:10:08.998 --rc geninfo_unexecuted_blocks=1 00:10:08.998 00:10:08.998 ' 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:08.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.998 --rc genhtml_branch_coverage=1 00:10:08.998 --rc genhtml_function_coverage=1 00:10:08.998 --rc genhtml_legend=1 00:10:08.998 --rc geninfo_all_blocks=1 00:10:08.998 --rc geninfo_unexecuted_blocks=1 00:10:08.998 00:10:08.998 ' 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:08.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.998 --rc genhtml_branch_coverage=1 00:10:08.998 --rc genhtml_function_coverage=1 00:10:08.998 --rc genhtml_legend=1 00:10:08.998 --rc geninfo_all_blocks=1 00:10:08.998 --rc geninfo_unexecuted_blocks=1 00:10:08.998 00:10:08.998 ' 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:08.998 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:08.998 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:17.148 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:17.148 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:17.148 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:17.148 09:42:31 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:17.148 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:17.148 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:17.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:17.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.715 ms 00:10:17.149 00:10:17.149 --- 10.0.0.2 ping statistics --- 00:10:17.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.149 rtt min/avg/max/mdev = 0.715/0.715/0.715/0.000 ms 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:17.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:17.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:10:17.149 00:10:17.149 --- 10.0.0.1 ping statistics --- 00:10:17.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.149 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:17.149 only one NIC for nvmf test 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
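Note on the setup traced above: after the PCI scan finds the two E810 ports (0x8086:0x159b) as cvl_0_0/cvl_0_1, nvmf_tcp_init moves the target port into the cvl_0_0_ns_spdk network namespace, addresses the pair as 10.0.0.2 (target) and 10.0.0.1 (initiator), opens TCP port 4420 with a tagged iptables rule, and ping-checks both directions before any NVMe traffic flows. A minimal standalone sketch of that topology, assuming a veth pair in place of the physical cvl_0_* interfaces (interface and namespace names here are illustrative, and root is required):

    # Rebuild the namespace topology from the trace above, with a veth
    # pair standing in for the two physical E810 ports.
    set -euo pipefail
    NS=nvmf_tgt_ns                                   # stand-in for cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link add ini0 type veth peer name tgt0        # ini0 ~ cvl_0_1, tgt0 ~ cvl_0_0
    ip link set tgt0 netns "$NS"                     # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev ini0                 # initiator IP, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev tgt0
    ip link set ini0 up
    ip netns exec "$NS" ip link set tgt0 up
    ip netns exec "$NS" ip link set lo up
    # Tagged ACCEPT rule for NVMe/TCP on 4420, mirroring the ipts helper.
    iptables -I INPUT 1 -i ini0 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                               # root ns -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1           # target ns -> initiator

The SPDK_NVMF comment tag is what lets the teardown path traced below (iptables-save piped through grep -v SPDK_NVMF into iptables-restore) strip exactly these rules again when nvmftestfini runs.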
00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:17.149 rmmod nvme_tcp 00:10:17.149 rmmod nvme_fabrics 00:10:17.149 rmmod nvme_keyring 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.149 09:42:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.536 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:18.536 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:18.536 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:18.536 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:18.536 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:18.536 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:18.536 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:18.536 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:18.536 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:18.536 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:18.536 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:18.536 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:10:18.536 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:18.536 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:18.536 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:18.536 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:18.536 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:18.536 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:18.536 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:18.536 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:18.536 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:18.536 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:18.536 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.536 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.536 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.536 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:18.536 00:10:18.536 real 0m9.986s 00:10:18.536 user 0m2.128s 00:10:18.536 sys 0m5.797s 00:10:18.536 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.536 09:42:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:18.536 ************************************ 00:10:18.536 END TEST nvmf_target_multipath 00:10:18.536 ************************************ 00:10:18.536 09:42:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:18.536 09:42:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:18.536 09:42:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.536 09:42:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:18.798 ************************************ 00:10:18.798 START TEST nvmf_zcopy 00:10:18.798 ************************************ 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:18.798 * Looking for test storage... 
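The trace that follows (identical to the one at the top of the multipath test) is the lcov version gate from scripts/common.sh: it splits dotted version strings on '.' and '-' into arrays and compares them field by field, so an lcov older than 2 gets the extra --rc branch/function coverage flags. A condensed sketch of that comparison, with the same splitting but simplified bookkeeping (not the verbatim cmp_versions helper):

    # lt A B: succeed (return 0) when version A sorts before version B,
    # comparing dot/dash-separated fields numerically, missing fields as 0.
    lt() {
        local -a ver1 ver2
        local v
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal versions are not "less than"
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 \
        && echo "pre-2.0 lcov: add --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"

Here lt 1.15 2 returns 0 at the first field (1 < 2), which is why the trace above and below ends up exporting the branch/function coverage options into LCOV_OPTS.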
00:10:18.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:18.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.798 --rc genhtml_branch_coverage=1 00:10:18.798 --rc genhtml_function_coverage=1 00:10:18.798 --rc genhtml_legend=1 00:10:18.798 --rc geninfo_all_blocks=1 00:10:18.798 --rc geninfo_unexecuted_blocks=1 00:10:18.798 00:10:18.798 ' 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:18.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.798 --rc genhtml_branch_coverage=1 00:10:18.798 --rc genhtml_function_coverage=1 00:10:18.798 --rc genhtml_legend=1 00:10:18.798 --rc geninfo_all_blocks=1 00:10:18.798 --rc geninfo_unexecuted_blocks=1 00:10:18.798 00:10:18.798 ' 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:18.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.798 --rc genhtml_branch_coverage=1 00:10:18.798 --rc genhtml_function_coverage=1 00:10:18.798 --rc genhtml_legend=1 00:10:18.798 --rc geninfo_all_blocks=1 00:10:18.798 --rc geninfo_unexecuted_blocks=1 00:10:18.798 00:10:18.798 ' 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:18.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.798 --rc genhtml_branch_coverage=1 00:10:18.798 --rc genhtml_function_coverage=1 00:10:18.798 --rc genhtml_legend=1 00:10:18.798 --rc geninfo_all_blocks=1 00:10:18.798 --rc geninfo_unexecuted_blocks=1 00:10:18.798 00:10:18.798 ' 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:18.798 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:18.799 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:18.799 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:18.799 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:18.799 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:18.799 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:18.799 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:18.799 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:18.799 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:18.799 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:18.799 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:18.799 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:18.799 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:18.799 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:18.799 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:18.799 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.799 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.799 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.799 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.799 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.799 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.799 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:18.799 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.799 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:18.799 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:18.799 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:18.799 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:18.799 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:18.799 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.060 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:19.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:19.060 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:19.060 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:19.060 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:19.060 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:19.060 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:19.060 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:10:19.060 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:19.060 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:19.060 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:19.060 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.060 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:19.060 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.060 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:19.060 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:19.060 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:19.060 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:27.203 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:27.203 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.203 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:27.204 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:27.204 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:27.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:27.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:10:27.204 00:10:27.204 --- 10.0.0.2 ping statistics --- 00:10:27.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.204 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:27.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:27.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:10:27.204 00:10:27.204 --- 10.0.0.1 ping statistics --- 00:10:27.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.204 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3724277 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3724277 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3724277 ']' 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:27.204 09:42:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:27.204 [2024-11-27 09:42:41.906896] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
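With the data plane verified, nvmfappstart launches the target inside the namespace using the NVMF_APP command line assembled earlier by build_nvmf_app_args (the harmless "[: : integer expression expected" message in that trace comes from an empty, unset test flag landing in a numeric -eq test), backgrounds it, records nvmfpid (3724277 in this run), and waitforlisten polls the RPC socket until the app answers. A simplified sketch of that launch-and-wait pattern; the polling loop here is condensed, and the real waitforlisten in autotest_common.sh also verifies the pid stays alive:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Start nvmf_tgt in the target namespace with the args from the trace.
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Poll the default RPC socket until the app is ready to serve RPCs.
    for _ in {1..100}; do
        "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done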
00:10:27.204 [2024-11-27 09:42:41.906966] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:27.204 [2024-11-27 09:42:42.009480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.204 [2024-11-27 09:42:42.061812] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:27.204 [2024-11-27 09:42:42.061865] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:27.204 [2024-11-27 09:42:42.061875] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:27.204 [2024-11-27 09:42:42.061885] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:27.204 [2024-11-27 09:42:42.061894] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:27.204 [2024-11-27 09:42:42.062745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:27.466 [2024-11-27 09:42:42.774771] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:27.466 [2024-11-27 09:42:42.799039] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:27.466 malloc0 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:27.466 { 00:10:27.466 "params": { 00:10:27.466 "name": "Nvme$subsystem", 00:10:27.466 "trtype": "$TEST_TRANSPORT", 00:10:27.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:27.466 "adrfam": "ipv4", 00:10:27.466 "trsvcid": "$NVMF_PORT", 00:10:27.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:27.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:27.466 "hdgst": ${hdgst:-false}, 00:10:27.466 "ddgst": ${ddgst:-false} 00:10:27.466 }, 00:10:27.466 "method": "bdev_nvme_attach_controller" 00:10:27.466 } 00:10:27.466 EOF 00:10:27.466 )") 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
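Everything the target needs for this test is configured through the handful of rpc_cmd calls traced above. rpc_cmd is the suite's thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the equivalent standalone sequence is roughly (flags copied verbatim from the log; the comments are glosses, not harness output):

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
  # TCP transport with zero-copy enabled (-o and -c 0 passed through as captured)
  $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
  # Subsystem open to any host (-a), fixed serial, at most 10 namespaces
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  # Data listener plus a discovery listener on the namespaced address
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # 32 MB RAM-backed bdev with 4096-byte blocks, exported as namespace 1
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The explicit -n 1 matters later: every further nvmf_subsystem_add_ns against the same NSID is what produces the 'Requested NSID 1 already in use' errors that dominate the rest of this log. The cat/jq/printf lines around this point are gen_nvmf_target_json assembling the JSON that the first bdevperf run (-t 10 -q 128 -w verify -o 8192) consumes.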
00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:27.466 09:42:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:27.466 "params": { 00:10:27.466 "name": "Nvme1", 00:10:27.466 "trtype": "tcp", 00:10:27.466 "traddr": "10.0.0.2", 00:10:27.466 "adrfam": "ipv4", 00:10:27.466 "trsvcid": "4420", 00:10:27.466 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:27.466 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:27.466 "hdgst": false, 00:10:27.466 "ddgst": false 00:10:27.466 }, 00:10:27.466 "method": "bdev_nvme_attach_controller" 00:10:27.466 }' 00:10:27.466 [2024-11-27 09:42:42.909540] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:10:27.466 [2024-11-27 09:42:42.909609] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3724626 ] 00:10:27.728 [2024-11-27 09:42:43.004340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.728 [2024-11-27 09:42:43.057008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.989 Running I/O for 10 seconds... 00:10:30.323 6450.00 IOPS, 50.39 MiB/s [2024-11-27T08:42:46.363Z] 6516.50 IOPS, 50.91 MiB/s [2024-11-27T08:42:47.771Z] 6526.00 IOPS, 50.98 MiB/s [2024-11-27T08:42:48.713Z] 7083.50 IOPS, 55.34 MiB/s [2024-11-27T08:42:49.656Z] 7621.40 IOPS, 59.54 MiB/s [2024-11-27T08:42:50.602Z] 7977.00 IOPS, 62.32 MiB/s [2024-11-27T08:42:51.544Z] 8228.00 IOPS, 64.28 MiB/s [2024-11-27T08:42:52.486Z] 8414.12 IOPS, 65.74 MiB/s [2024-11-27T08:42:53.429Z] 8563.89 IOPS, 66.91 MiB/s [2024-11-27T08:42:53.429Z] 8680.80 IOPS, 67.82 MiB/s 00:10:37.963 Latency(us) 00:10:37.963 [2024-11-27T08:42:53.429Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:37.963 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:37.963 Verification LBA range: start 0x0 length 0x1000 00:10:37.963 Nvme1n1 : 10.01 8685.03 67.85 0.00 0.00 14692.53 2102.61 28398.93 00:10:37.963 [2024-11-27T08:42:53.429Z] =================================================================================================================== 00:10:37.963 [2024-11-27T08:42:53.429Z] Total : 8685.03 67.85 0.00 0.00 14692.53 2102.61 28398.93 00:10:38.224 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3726643 00:10:38.224 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:38.224 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:38.224 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:38.224 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:38.224 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:38.224 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:38.224 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:38.224 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:38.224 { 00:10:38.224 "params": { 00:10:38.224 "name": 
"Nvme$subsystem", 00:10:38.224 "trtype": "$TEST_TRANSPORT", 00:10:38.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:38.224 "adrfam": "ipv4", 00:10:38.224 "trsvcid": "$NVMF_PORT", 00:10:38.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:38.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:38.224 "hdgst": ${hdgst:-false}, 00:10:38.224 "ddgst": ${ddgst:-false} 00:10:38.224 }, 00:10:38.224 "method": "bdev_nvme_attach_controller" 00:10:38.224 } 00:10:38.224 EOF 00:10:38.224 )") 00:10:38.224 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:38.224 [2024-11-27 09:42:53.493319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.224 [2024-11-27 09:42:53.493347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.224 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:10:38.224 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:38.224 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:38.224 "params": { 00:10:38.224 "name": "Nvme1", 00:10:38.224 "trtype": "tcp", 00:10:38.225 "traddr": "10.0.0.2", 00:10:38.225 "adrfam": "ipv4", 00:10:38.225 "trsvcid": "4420", 00:10:38.225 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:38.225 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:38.225 "hdgst": false, 00:10:38.225 "ddgst": false 00:10:38.225 }, 00:10:38.225 "method": "bdev_nvme_attach_controller" 00:10:38.225 }' 00:10:38.225 [2024-11-27 09:42:53.505331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.225 [2024-11-27 09:42:53.505346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.225 [2024-11-27 09:42:53.517351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.225 [2024-11-27 09:42:53.517360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.225 [2024-11-27 09:42:53.529392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.225 [2024-11-27 09:42:53.529400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.225 [2024-11-27 09:42:53.533820] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
00:10:38.225 [2024-11-27 09:42:53.533873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3726643 ] 00:10:38.225 [2024-11-27 09:42:53.541416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.225 [2024-11-27 09:42:53.541429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.225 [2024-11-27 09:42:53.553443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.225 [2024-11-27 09:42:53.553451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.225 [2024-11-27 09:42:53.565475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.225 [2024-11-27 09:42:53.565483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.225 [2024-11-27 09:42:53.577506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.225 [2024-11-27 09:42:53.577513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.225 [2024-11-27 09:42:53.589536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.225 [2024-11-27 09:42:53.589543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.225 [2024-11-27 09:42:53.601567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.225 [2024-11-27 09:42:53.601574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.225 [2024-11-27 09:42:53.613596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.225 [2024-11-27 09:42:53.613604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.225 [2024-11-27 09:42:53.617880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.225 [2024-11-27 09:42:53.625628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.225 [2024-11-27 09:42:53.625637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.225 [2024-11-27 09:42:53.637658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.225 [2024-11-27 09:42:53.637667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.225 [2024-11-27 09:42:53.646926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.225 [2024-11-27 09:42:53.649690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.225 [2024-11-27 09:42:53.649698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.225 [2024-11-27 09:42:53.661725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.225 [2024-11-27 09:42:53.661736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.225 [2024-11-27 09:42:53.673754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.225 [2024-11-27 09:42:53.673766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.225 [2024-11-27 09:42:53.685782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:10:38.225 [2024-11-27 09:42:53.685794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.486 [2024-11-27 09:42:53.697813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.486 [2024-11-27 09:42:53.697822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.486 [2024-11-27 09:42:53.709843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.486 [2024-11-27 09:42:53.709851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.486 [2024-11-27 09:42:53.721881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.486 [2024-11-27 09:42:53.721896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.486 [2024-11-27 09:42:53.733910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.486 [2024-11-27 09:42:53.733925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.486 [2024-11-27 09:42:53.745945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.486 [2024-11-27 09:42:53.745956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.486 [2024-11-27 09:42:53.757975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.486 [2024-11-27 09:42:53.757987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.486 [2024-11-27 09:42:53.770006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.486 [2024-11-27 09:42:53.770018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.486 [2024-11-27 09:42:53.818506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.486 [2024-11-27 09:42:53.818523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.486 [2024-11-27 09:42:53.830168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.486 [2024-11-27 09:42:53.830181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.486 Running I/O for 5 seconds... 
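From here to the end of the capture the log is dominated by a single repeating pair: subsystem.c:2123 (spdk_nvmf_subsystem_add_ns_ext) rejecting NSID 1 as already in use, answered by nvmf_rpc.c:1517 (nvmf_rpc_ns_paused) reporting the failed RPC. This appears to be the test working as intended rather than a fault: while bdevperf drives the 5-second randrw workload, the script keeps re-issuing nvmf_subsystem_add_ns for the NSID claimed during setup, and each attempt pauses the subsystem, fails the add, and resumes it, exercising the zero-copy path with requests in flight across pause/resume. A hypothetical loop with the same effect (illustrative only, not the test's literal code; RPC as in the earlier sketch, perfpid being bdevperf's PID, 3726643 above):

  # Illustrative only: hammer add_ns with an already-claimed NSID while
  # I/O runs, forcing one pause/fail/resume cycle per attempt.
  while kill -0 "$perfpid" 2> /dev/null; do
      $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done

The roughly 13 ms cadence of the error pairs below is then just the turnaround of one pause/add/resume round trip.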
00:10:38.486 [2024-11-27 09:42:53.846024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.486 [2024-11-27 09:42:53.846041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.486 [2024-11-27 09:42:53.858966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.486 [2024-11-27 09:42:53.858987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.486 [2024-11-27 09:42:53.872482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.486 [2024-11-27 09:42:53.872498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.486 [2024-11-27 09:42:53.885896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.486 [2024-11-27 09:42:53.885913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.486 [2024-11-27 09:42:53.899590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.486 [2024-11-27 09:42:53.899608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.486 [2024-11-27 09:42:53.912770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.486 [2024-11-27 09:42:53.912787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.486 [2024-11-27 09:42:53.925989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.486 [2024-11-27 09:42:53.926008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.486 [2024-11-27 09:42:53.939508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.486 [2024-11-27 09:42:53.939528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.748 [2024-11-27 09:42:53.953228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.748 [2024-11-27 09:42:53.953245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.748 [2024-11-27 09:42:53.966556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.748 [2024-11-27 09:42:53.966572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.748 [2024-11-27 09:42:53.979932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.748 [2024-11-27 09:42:53.979950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.748 [2024-11-27 09:42:53.992465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.748 [2024-11-27 09:42:53.992486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.748 [2024-11-27 09:42:54.005826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.748 [2024-11-27 09:42:54.005842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.748 [2024-11-27 09:42:54.019488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.748 [2024-11-27 09:42:54.019509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.748 [2024-11-27 09:42:54.032771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.748 
[2024-11-27 09:42:54.032787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.748 [... the paired messages subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace repeat verbatim, timestamps advancing roughly every 13 ms, from 09:42:54.045 to 09:42:54.719 ...] [2024-11-27 09:42:54.733053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.536 [2024-11-27 09:42:54.733069]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.536 [2024-11-27 09:42:54.812191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.536 [2024-11-27 09:42:54.812210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.536 [2024-11-27 09:42:54.825369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.536 [2024-11-27 09:42:54.825387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.536 18681.00 IOPS, 145.95 MiB/s [2024-11-27T08:42:55.002Z] [2024-11-27 09:42:54.838733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.536 [2024-11-27 09:42:54.838749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.536 [2024-11-27 09:42:54.852046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.536 [2024-11-27 09:42:54.852061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.537 [2024-11-27 09:42:54.865431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.537 [2024-11-27 09:42:54.865446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.537 [2024-11-27 09:42:54.878646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.537 [2024-11-27 09:42:54.878664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.537 [2024-11-27 09:42:54.892008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.537 [2024-11-27 09:42:54.892023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.537 [2024-11-27 09:42:54.904571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.537 [2024-11-27 09:42:54.904587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.537 [2024-11-27 09:42:54.918186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.537 [2024-11-27 09:42:54.918202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.537 [2024-11-27 09:42:54.931441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.537 [2024-11-27 09:42:54.931457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.537 [2024-11-27 09:42:54.944653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.537 [2024-11-27 09:42:54.944669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.537 [2024-11-27 09:42:54.957773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.537 [2024-11-27 09:42:54.957789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.537 [2024-11-27 09:42:54.971198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.537 [2024-11-27 09:42:54.971214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.537 [2024-11-27 09:42:54.984736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.537 [2024-11-27 09:42:54.984752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.537 [2024-11-27 
09:42:54.997576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.537 [2024-11-27 09:42:54.997592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.798 [... the same error pair repeats verbatim, timestamps advancing roughly every 13 ms, from 09:42:55.010 to 09:42:55.751 ...] [2024-11-27 09:42:55.764179]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.322 [2024-11-27 09:42:55.764202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.322 [2024-11-27 09:42:55.777688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.322 [2024-11-27 09:42:55.777704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.584 [2024-11-27 09:42:55.790395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.584 [2024-11-27 09:42:55.790411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.584 [2024-11-27 09:42:55.803903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.584 [2024-11-27 09:42:55.803922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.584 [2024-11-27 09:42:55.817186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.584 [2024-11-27 09:42:55.817203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.584 [2024-11-27 09:42:55.830529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.584 [2024-11-27 09:42:55.830545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.584 18812.50 IOPS, 146.97 MiB/s [2024-11-27T08:42:56.050Z] [2024-11-27 09:42:55.843649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.584 [2024-11-27 09:42:55.843665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.584 [2024-11-27 09:42:55.857174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.584 [2024-11-27 09:42:55.857190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.584 [2024-11-27 09:42:55.870560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.584 [2024-11-27 09:42:55.870577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.584 [2024-11-27 09:42:55.883888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.584 [2024-11-27 09:42:55.883905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.584 [2024-11-27 09:42:55.897337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.584 [2024-11-27 09:42:55.897355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.584 [2024-11-27 09:42:55.910634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.584 [2024-11-27 09:42:55.910650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.584 [2024-11-27 09:42:55.923863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.584 [2024-11-27 09:42:55.923880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.584 [2024-11-27 09:42:55.937592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.584 [2024-11-27 09:42:55.937608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.584 [2024-11-27 09:42:55.950624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:40.584 [2024-11-27 09:42:55.950641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.584 [2024-11-27 09:42:55.964002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.584 [2024-11-27 09:42:55.964018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.584 [2024-11-27 09:42:55.977476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.584 [2024-11-27 09:42:55.977493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.584 [2024-11-27 09:42:55.990950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.584 [2024-11-27 09:42:55.990967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.584 [2024-11-27 09:42:56.004081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.584 [2024-11-27 09:42:56.004098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.584 [2024-11-27 09:42:56.017577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.584 [2024-11-27 09:42:56.017594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.584 [2024-11-27 09:42:56.030750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.584 [2024-11-27 09:42:56.030766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.584 [2024-11-27 09:42:56.043516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.584 [2024-11-27 09:42:56.043534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.845 [2024-11-27 09:42:56.056590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.845 [2024-11-27 09:42:56.056607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.845 [2024-11-27 09:42:56.069934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.845 [2024-11-27 09:42:56.069950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.845 [2024-11-27 09:42:56.083223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.845 [2024-11-27 09:42:56.083240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.845 [2024-11-27 09:42:56.096824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.845 [2024-11-27 09:42:56.096844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.845 [2024-11-27 09:42:56.110120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.845 [2024-11-27 09:42:56.110137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.845 [2024-11-27 09:42:56.123734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.845 [2024-11-27 09:42:56.123751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.845 [2024-11-27 09:42:56.136765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.845 [2024-11-27 09:42:56.136782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.845 [2024-11-27 09:42:56.150393] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.845 [2024-11-27 09:42:56.150410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.846 [2024-11-27 09:42:56.163929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.846 [2024-11-27 09:42:56.163946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.846 [2024-11-27 09:42:56.177563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.846 [2024-11-27 09:42:56.177580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.846 [2024-11-27 09:42:56.191002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.846 [2024-11-27 09:42:56.191019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.846 [2024-11-27 09:42:56.204553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.846 [2024-11-27 09:42:56.204569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.846 [2024-11-27 09:42:56.217985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.846 [2024-11-27 09:42:56.218003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.846 [2024-11-27 09:42:56.231193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.846 [2024-11-27 09:42:56.231212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.846 [2024-11-27 09:42:56.244680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.846 [2024-11-27 09:42:56.244697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.846 [2024-11-27 09:42:56.257830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.846 [2024-11-27 09:42:56.257847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.846 [2024-11-27 09:42:56.271676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.846 [2024-11-27 09:42:56.271693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.846 [2024-11-27 09:42:56.284319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.846 [2024-11-27 09:42:56.284339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.846 [2024-11-27 09:42:56.297855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.846 [2024-11-27 09:42:56.297871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.846 [2024-11-27 09:42:56.310978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.846 [2024-11-27 09:42:56.310995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.107 [2024-11-27 09:42:56.324359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.107 [2024-11-27 09:42:56.324377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.107 [2024-11-27 09:42:56.337592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.107 [2024-11-27 09:42:56.337609] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.107 [2024-11-27 09:42:56.351013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.107 [2024-11-27 09:42:56.351029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.107 [2024-11-27 09:42:56.364225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.107 [2024-11-27 09:42:56.364241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.107 [2024-11-27 09:42:56.376999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.107 [2024-11-27 09:42:56.377015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.107 [2024-11-27 09:42:56.390532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.107 [2024-11-27 09:42:56.390550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.107 [2024-11-27 09:42:56.403943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.107 [2024-11-27 09:42:56.403959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.107 [2024-11-27 09:42:56.416933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.107 [2024-11-27 09:42:56.416949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.108 [2024-11-27 09:42:56.429800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.108 [2024-11-27 09:42:56.429815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.108 [2024-11-27 09:42:56.442888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.108 [2024-11-27 09:42:56.442904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.108 [2024-11-27 09:42:56.455738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.108 [2024-11-27 09:42:56.455755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.108 [2024-11-27 09:42:56.469183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.108 [2024-11-27 09:42:56.469199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.108 [2024-11-27 09:42:56.482525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.108 [2024-11-27 09:42:56.482541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.108 [2024-11-27 09:42:56.495862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.108 [2024-11-27 09:42:56.495877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.108 [2024-11-27 09:42:56.509394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.108 [2024-11-27 09:42:56.509410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.108 [2024-11-27 09:42:56.522214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.108 [2024-11-27 09:42:56.522233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.108 [2024-11-27 09:42:56.534892] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.108 [2024-11-27 09:42:56.534918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.108 [2024-11-27 09:42:56.548367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.108 [2024-11-27 09:42:56.548387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.108 [2024-11-27 09:42:56.561799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.108 [2024-11-27 09:42:56.561816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.369 [2024-11-27 09:42:56.575079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.369 [2024-11-27 09:42:56.575099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.369 [2024-11-27 09:42:56.588534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.369 [2024-11-27 09:42:56.588550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.369 [2024-11-27 09:42:56.602201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.369 [2024-11-27 09:42:56.602217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.369 [2024-11-27 09:42:56.615456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.369 [2024-11-27 09:42:56.615472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.369 [2024-11-27 09:42:56.628511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.369 [2024-11-27 09:42:56.628527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.369 [2024-11-27 09:42:56.641813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.369 [2024-11-27 09:42:56.641829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.369 [2024-11-27 09:42:56.655496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.369 [2024-11-27 09:42:56.655513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.369 [2024-11-27 09:42:56.668229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.369 [2024-11-27 09:42:56.668245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.369 [2024-11-27 09:42:56.680810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.369 [2024-11-27 09:42:56.680826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.369 [2024-11-27 09:42:56.694045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.369 [2024-11-27 09:42:56.694064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.369 [2024-11-27 09:42:56.707386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.369 [2024-11-27 09:42:56.707402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.369 [2024-11-27 09:42:56.720033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.369 [2024-11-27 09:42:56.720049] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.369 [2024-11-27 09:42:56.732600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.369 [2024-11-27 09:42:56.732616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.369 [2024-11-27 09:42:56.745882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.369 [2024-11-27 09:42:56.745898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.369 [2024-11-27 09:42:56.759275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.369 [2024-11-27 09:42:56.759297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.369 [2024-11-27 09:42:56.772440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.369 [2024-11-27 09:42:56.772456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.369 [2024-11-27 09:42:56.785709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.369 [2024-11-27 09:42:56.785725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.369 [2024-11-27 09:42:56.799228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.369 [2024-11-27 09:42:56.799244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.369 [2024-11-27 09:42:56.812407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.369 [2024-11-27 09:42:56.812424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.369 [2024-11-27 09:42:56.825630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.369 [2024-11-27 09:42:56.825646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.631 [2024-11-27 09:42:56.839416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.631 [2024-11-27 09:42:56.839433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.631 18826.00 IOPS, 147.08 MiB/s [2024-11-27T08:42:57.097Z] [2024-11-27 09:42:56.852832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.631 [2024-11-27 09:42:56.852847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.631 [2024-11-27 09:42:56.867017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.631 [2024-11-27 09:42:56.867035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.631 [2024-11-27 09:42:56.879677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.631 [2024-11-27 09:42:56.879693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.632 [2024-11-27 09:42:56.893375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.632 [2024-11-27 09:42:56.893395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.632 [2024-11-27 09:42:56.906734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.632 [2024-11-27 09:42:56.906750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.632 [2024-11-27 
09:42:56.920140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.632 [2024-11-27 09:42:56.920162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.632 [2024-11-27 09:42:56.933319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.632 [2024-11-27 09:42:56.933336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.632 [2024-11-27 09:42:56.946186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.632 [2024-11-27 09:42:56.946202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.632 [2024-11-27 09:42:56.959615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.632 [2024-11-27 09:42:56.959630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.632 [2024-11-27 09:42:56.972839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.632 [2024-11-27 09:42:56.972856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.632 [2024-11-27 09:42:56.986009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.632 [2024-11-27 09:42:56.986025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.632 [2024-11-27 09:42:56.999490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.632 [2024-11-27 09:42:56.999505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.632 [2024-11-27 09:42:57.012757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.632 [2024-11-27 09:42:57.012780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.632 [2024-11-27 09:42:57.026088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.632 [2024-11-27 09:42:57.026106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.632 [2024-11-27 09:42:57.039356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.632 [2024-11-27 09:42:57.039372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.632 [2024-11-27 09:42:57.052292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.632 [2024-11-27 09:42:57.052308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.632 [2024-11-27 09:42:57.065744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.632 [2024-11-27 09:42:57.065763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.632 [2024-11-27 09:42:57.079057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.632 [2024-11-27 09:42:57.079076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.632 [2024-11-27 09:42:57.092542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.632 [2024-11-27 09:42:57.092558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.893 [2024-11-27 09:42:57.104843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.893 [2024-11-27 09:42:57.104860] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.893 [2024-11-27 09:42:57.118227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.893 [2024-11-27 09:42:57.118244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.893 [2024-11-27 09:42:57.131611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.893 [2024-11-27 09:42:57.131627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.893 [2024-11-27 09:42:57.144265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.893 [2024-11-27 09:42:57.144281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.893 [2024-11-27 09:42:57.157847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.893 [2024-11-27 09:42:57.157863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.893 [2024-11-27 09:42:57.171238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.893 [2024-11-27 09:42:57.171255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.893 [2024-11-27 09:42:57.184423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.893 [2024-11-27 09:42:57.184439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.893 [2024-11-27 09:42:57.197816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.893 [2024-11-27 09:42:57.197832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.893 [2024-11-27 09:42:57.210484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.893 [2024-11-27 09:42:57.210499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.893 [2024-11-27 09:42:57.223446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.893 [2024-11-27 09:42:57.223462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.893 [2024-11-27 09:42:57.236922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.893 [2024-11-27 09:42:57.236938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.893 [2024-11-27 09:42:57.250555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.893 [2024-11-27 09:42:57.250571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.893 [2024-11-27 09:42:57.264110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.893 [2024-11-27 09:42:57.264131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.893 [2024-11-27 09:42:57.277720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.893 [2024-11-27 09:42:57.277736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.893 [2024-11-27 09:42:57.290628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.893 [2024-11-27 09:42:57.290645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.893 [2024-11-27 09:42:57.303660] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.893 [2024-11-27 09:42:57.303676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.893 [2024-11-27 09:42:57.316813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.893 [2024-11-27 09:42:57.316833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.893 [2024-11-27 09:42:57.330361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.893 [2024-11-27 09:42:57.330381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.893 [2024-11-27 09:42:57.344092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.893 [2024-11-27 09:42:57.344108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.893 [2024-11-27 09:42:57.356862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.893 [2024-11-27 09:42:57.356879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.176 [2024-11-27 09:42:57.369682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.176 [2024-11-27 09:42:57.369699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.176 [2024-11-27 09:42:57.382966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.176 [2024-11-27 09:42:57.382982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.176 [2024-11-27 09:42:57.396088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.176 [2024-11-27 09:42:57.396105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.176 [2024-11-27 09:42:57.409289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.176 [2024-11-27 09:42:57.409306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.176 [2024-11-27 09:42:57.422756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.176 [2024-11-27 09:42:57.422774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.176 [2024-11-27 09:42:57.435985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.176 [2024-11-27 09:42:57.436002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.176 [2024-11-27 09:42:57.448992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.176 [2024-11-27 09:42:57.449008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.176 [2024-11-27 09:42:57.462669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.176 [2024-11-27 09:42:57.462686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.176 [2024-11-27 09:42:57.475873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.176 [2024-11-27 09:42:57.475888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.176 [2024-11-27 09:42:57.489240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.176 [2024-11-27 09:42:57.489256] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.176 [2024-11-27 09:42:57.502672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.176 [2024-11-27 09:42:57.502692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.176 [2024-11-27 09:42:57.515674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.176 [2024-11-27 09:42:57.515691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.176 [2024-11-27 09:42:57.528412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.176 [2024-11-27 09:42:57.528429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.176 [2024-11-27 09:42:57.541006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.176 [2024-11-27 09:42:57.541027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.176 [2024-11-27 09:42:57.554121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.176 [2024-11-27 09:42:57.554138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.176 [2024-11-27 09:42:57.566874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.176 [2024-11-27 09:42:57.566891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.176 [2024-11-27 09:42:57.579914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.176 [2024-11-27 09:42:57.579930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.176 [2024-11-27 09:42:57.593117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.176 [2024-11-27 09:42:57.593133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.176 [2024-11-27 09:42:57.606408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.176 [2024-11-27 09:42:57.606425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.176 [2024-11-27 09:42:57.618968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.176 [2024-11-27 09:42:57.618985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.176 [2024-11-27 09:42:57.632739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.176 [2024-11-27 09:42:57.632755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.444 [2024-11-27 09:42:57.645530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.444 [2024-11-27 09:42:57.645547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.444 [2024-11-27 09:42:57.658117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.444 [2024-11-27 09:42:57.658134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.444 [2024-11-27 09:42:57.671016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.444 [2024-11-27 09:42:57.671033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.444 [2024-11-27 09:42:57.684603] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.444 [2024-11-27 09:42:57.684619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.444 [2024-11-27 09:42:57.697867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.444 [2024-11-27 09:42:57.697883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.444 [2024-11-27 09:42:57.711852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.444 [2024-11-27 09:42:57.711869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.444 [2024-11-27 09:42:57.724092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.444 [2024-11-27 09:42:57.724110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.444 [2024-11-27 09:42:57.737270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.444 [2024-11-27 09:42:57.737286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.444 [2024-11-27 09:42:57.750625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.444 [2024-11-27 09:42:57.750646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.444 [2024-11-27 09:42:57.763576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.444 [2024-11-27 09:42:57.763593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.444 [2024-11-27 09:42:57.777109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.444 [2024-11-27 09:42:57.777125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.444 [2024-11-27 09:42:57.790530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.444 [2024-11-27 09:42:57.790549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.444 [2024-11-27 09:42:57.803859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.444 [2024-11-27 09:42:57.803877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.444 [2024-11-27 09:42:57.817522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.444 [2024-11-27 09:42:57.817538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.444 [2024-11-27 09:42:57.830853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.444 [2024-11-27 09:42:57.830869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.444 18834.00 IOPS, 147.14 MiB/s [2024-11-27T08:42:57.910Z] [2024-11-27 09:42:57.844098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.444 [2024-11-27 09:42:57.844114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.444 [2024-11-27 09:42:57.857669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.444 [2024-11-27 09:42:57.857686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.444 [2024-11-27 09:42:57.870119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:42.444 [2024-11-27 09:42:57.870135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.444 [2024-11-27 09:42:57.883131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.444 [2024-11-27 09:42:57.883147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.444 [2024-11-27 09:42:57.896572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.444 [2024-11-27 09:42:57.896588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.766 [2024-11-27 09:42:57.909989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.766 [2024-11-27 09:42:57.910009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.766 [2024-11-27 09:42:57.923223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.766 [2024-11-27 09:42:57.923240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.766 [2024-11-27 09:42:57.936670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.766 [2024-11-27 09:42:57.936686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.766 [2024-11-27 09:42:57.949476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.766 [2024-11-27 09:42:57.949493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.766 [2024-11-27 09:42:57.963006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.766 [2024-11-27 09:42:57.963022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.766 [2024-11-27 09:42:57.975797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.766 [2024-11-27 09:42:57.975813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.766 [2024-11-27 09:42:57.988962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.766 [2024-11-27 09:42:57.988978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.766 [2024-11-27 09:42:58.001894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.766 [2024-11-27 09:42:58.001915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.766 [2024-11-27 09:42:58.014873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.766 [2024-11-27 09:42:58.014891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.766 [2024-11-27 09:42:58.028245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.766 [2024-11-27 09:42:58.028262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.766 [2024-11-27 09:42:58.041318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.766 [2024-11-27 09:42:58.041338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.766 [2024-11-27 09:42:58.054875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.766 [2024-11-27 09:42:58.054892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.766 [2024-11-27 09:42:58.068251] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.766 [2024-11-27 09:42:58.068267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.766 [2024-11-27 09:42:58.081454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.766 [2024-11-27 09:42:58.081470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.766 [2024-11-27 09:42:58.094986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.766 [2024-11-27 09:42:58.095005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.766 [2024-11-27 09:42:58.107882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.766 [2024-11-27 09:42:58.107899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.766 [2024-11-27 09:42:58.120402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.766 [2024-11-27 09:42:58.120418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.766 [2024-11-27 09:42:58.134340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.766 [2024-11-27 09:42:58.134356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.766 [2024-11-27 09:42:58.147278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.766 [2024-11-27 09:42:58.147293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.766 [2024-11-27 09:42:58.160478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.766 [2024-11-27 09:42:58.160494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.767 [2024-11-27 09:42:58.173936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.767 [2024-11-27 09:42:58.173952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.767 [2024-11-27 09:42:58.186697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.767 [2024-11-27 09:42:58.186714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.767 [2024-11-27 09:42:58.199581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.767 [2024-11-27 09:42:58.199597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.767 [2024-11-27 09:42:58.212528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.767 [2024-11-27 09:42:58.212544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.084 [2024-11-27 09:42:58.224654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.084 [2024-11-27 09:42:58.224671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.084 [2024-11-27 09:42:58.237721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.084 [2024-11-27 09:42:58.237737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.084 [2024-11-27 09:42:58.250409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.084 [2024-11-27 09:42:58.250433] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.084 [2024-11-27 09:42:58.263774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.084 [2024-11-27 09:42:58.263790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.084 [2024-11-27 09:42:58.277018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.084 [2024-11-27 09:42:58.277033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.085 [2024-11-27 09:42:58.290460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.085 [2024-11-27 09:42:58.290476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.085 [2024-11-27 09:42:58.303791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.085 [2024-11-27 09:42:58.303807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.085 [2024-11-27 09:42:58.317152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.085 [2024-11-27 09:42:58.317176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.085 [2024-11-27 09:42:58.330504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.085 [2024-11-27 09:42:58.330519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.085 [2024-11-27 09:42:58.343931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.085 [2024-11-27 09:42:58.343947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.085 [2024-11-27 09:42:58.356754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.085 [2024-11-27 09:42:58.356771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.085 [2024-11-27 09:42:58.370031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.085 [2024-11-27 09:42:58.370047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.085 [2024-11-27 09:42:58.383670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.085 [2024-11-27 09:42:58.383686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.085 [2024-11-27 09:42:58.397255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.085 [2024-11-27 09:42:58.397271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.085 [2024-11-27 09:42:58.410538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.085 [2024-11-27 09:42:58.410554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.085 [2024-11-27 09:42:58.423585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.085 [2024-11-27 09:42:58.423604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.085 [2024-11-27 09:42:58.436585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.085 [2024-11-27 09:42:58.436601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.085 [2024-11-27 09:42:58.449828] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.085 [2024-11-27 09:42:58.449848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.085 [2024-11-27 09:42:58.463153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.085 [2024-11-27 09:42:58.463173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.085 [2024-11-27 09:42:58.476492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.085 [2024-11-27 09:42:58.476508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.085 [2024-11-27 09:42:58.489703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.085 [2024-11-27 09:42:58.489719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.085 [2024-11-27 09:42:58.502972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.085 [2024-11-27 09:42:58.502992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.085 [2024-11-27 09:42:58.516620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.085 [2024-11-27 09:42:58.516636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.085 [2024-11-27 09:42:58.530001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.085 [2024-11-27 09:42:58.530018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.347 [2024-11-27 09:42:58.543474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.347 [2024-11-27 09:42:58.543493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.347 [2024-11-27 09:42:58.556646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.347 [2024-11-27 09:42:58.556663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.347 [2024-11-27 09:42:58.570070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.347 [2024-11-27 09:42:58.570086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.347 [2024-11-27 09:42:58.583377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.347 [2024-11-27 09:42:58.583396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.347 [2024-11-27 09:42:58.596851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.347 [2024-11-27 09:42:58.596867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.347 [2024-11-27 09:42:58.610071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.347 [2024-11-27 09:42:58.610087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.347 [2024-11-27 09:42:58.623411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.347 [2024-11-27 09:42:58.623426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.347 [2024-11-27 09:42:58.636711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.347 [2024-11-27 09:42:58.636727] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.347 [2024-11-27 09:42:58.650311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.347 [2024-11-27 09:42:58.650327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.347 [2024-11-27 09:42:58.663911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.347 [2024-11-27 09:42:58.663926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.347 [2024-11-27 09:42:58.677135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.347 [2024-11-27 09:42:58.677152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.347 [2024-11-27 09:42:58.690511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.347 [2024-11-27 09:42:58.690527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.347 [2024-11-27 09:42:58.704033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.347 [2024-11-27 09:42:58.704049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.347 [2024-11-27 09:42:58.717470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.347 [2024-11-27 09:42:58.717485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.347 [2024-11-27 09:42:58.730110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.347 [2024-11-27 09:42:58.730126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.347 [2024-11-27 09:42:58.743344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.347 [2024-11-27 09:42:58.743362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.347 [2024-11-27 09:42:58.756180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.347 [2024-11-27 09:42:58.756207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.347 [2024-11-27 09:42:58.769631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.347 [2024-11-27 09:42:58.769651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.347 [2024-11-27 09:42:58.782416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.347 [2024-11-27 09:42:58.782432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.347 [2024-11-27 09:42:58.795522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.347 [2024-11-27 09:42:58.795538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.347 [2024-11-27 09:42:58.808791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.347 [2024-11-27 09:42:58.808811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.609 [2024-11-27 09:42:58.822243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.609 [2024-11-27 09:42:58.822259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.610 [2024-11-27 09:42:58.834638] 
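The two messages above repeat because each pass of the test asks for an explicit NSID that is still attached: spdk_nvmf_subsystem_add_ns_ext() rejects the duplicate and the RPC layer then logs the failed add. A minimal sketch of how to provoke the same pair of errors against a running target follows; the scripts/rpc.py entry point and the malloc0/malloc1 bdev names are assumptions for illustration, not taken from this log (the traced harness goes through its rpc_cmd wrapper instead).

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path to SPDK's RPC client

$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1     # first claim of NSID 1 succeeds (bdev name assumed)
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1     # NSID 1 already in use -> the two errors above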
00:10:43.610 18834.20 IOPS, 147.14 MiB/s [2024-11-27T08:42:59.076Z]
00:10:43.610
00:10:43.610 Latency(us)
[2024-11-27T08:42:59.076Z] Device Information : runtime(s)     IOPS    MiB/s   Fail/s   TO/s   Average       min       max
00:10:43.610 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:43.610 Nvme1n1 :             5.01 18838.84   147.18     0.00   0.00   6788.40   2525.87  12779.52
00:10:43.610 [2024-11-27T08:42:59.076Z] ===================================================================================================================
00:10:43.610 [2024-11-27T08:42:59.076Z] Total :         18838.84   147.18     0.00   0.00   6788.40   2525.87  12779.52
00:10:43.610 [2024-11-27 09:42:58.856771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:43.610 [2024-11-27 09:42:58.856786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:43.610 (last two messages repeated for the remaining queued attempts through [2024-11-27 09:42:58.953020])
00:10:43.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3726643) - No such process
00:10:43.610 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3726643
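A quick consistency check on the latency summary a few lines up (arithmetic added here, not part of the log): with the job's 8192-byte I/O size, the MiB/s column should equal IOPS x 8192 / 2^20.

echo 'scale=2; 18838.84 * 8192 / 1048576' | bc   # sanity check, not from the log: 147.17, matching the reported 147.18 MiB/s up to rounding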
00:10:43.610 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:43.610 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:43.610 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:43.610 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:43.610 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:10:43.610 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:43.610 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:43.610 delay0
00:10:43.610 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:43.610 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:10:43.610 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:43.610 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:43.610 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:43.610 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:10:43.872 [2024-11-27 09:42:59.133339] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:10:52.015 [2024-11-27 09:43:06.216921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ee680 is same with the state(6) to be set
00:10:52.015 [2024-11-27 09:43:06.216958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ee680 is same with the state(6) to be set
00:10:52.015 Initializing NVMe Controllers
00:10:52.015 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:52.015 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:52.015 Initialization complete. Launching workers.
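The traced commands above swap the fast namespace for a delay bdev before running the abort example: with roughly one second of injected latency per operation, submitted I/O stays in flight long enough for abort requests to catch it. Replayed outside the harness, the sequence would look like the sketch below; the scripts/rpc.py entry point is an assumption (zcopy.sh itself uses the rpc_cmd wrapper), while every flag and name is taken verbatim from the trace.

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$spdk/scripts/rpc.py"                                            # assumed RPC client path

"$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1          # drop the existing namespace
"$rpc" bdev_delay_create -b malloc0 -d delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000                    # wrap malloc0 with ~1 s avg/p99 read and write latency
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1   # re-export the slow bdev as NSID 1

"$spdk/build/examples/abort" -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'     # 5 s of randrw at queue depth 64, aborting as it goes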
00:10:52.015 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 230, failed: 34476 00:10:52.015 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 34578, failed to submit 128 00:10:52.015 success 34512, unsuccessful 66, failed 0 00:10:52.015 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:52.015 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:52.015 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:52.015 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:52.015 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:52.015 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:52.015 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:52.015 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:52.015 rmmod nvme_tcp 00:10:52.015 rmmod nvme_fabrics 00:10:52.015 rmmod nvme_keyring 00:10:52.015 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:52.015 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:52.015 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:52.015 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3724277 ']' 00:10:52.015 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3724277 00:10:52.015 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3724277 ']' 00:10:52.015 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3724277 00:10:52.015 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:52.015 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:52.015 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3724277 00:10:52.015 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:52.015 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:52.015 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3724277' 00:10:52.015 killing process with pid 3724277 00:10:52.015 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3724277 00:10:52.015 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3724277 00:10:52.015 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:52.015 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:52.015 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:52.015 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:52.015 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:52.015 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:52.015 09:43:06 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:52.015 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:52.015 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:52.015 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.015 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:52.015 09:43:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.401 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:53.401 00:10:53.401 real 0m34.519s 00:10:53.401 user 0m45.323s 00:10:53.401 sys 0m11.741s 00:10:53.401 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.401 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:53.401 ************************************ 00:10:53.401 END TEST nvmf_zcopy 00:10:53.402 ************************************ 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:53.402 ************************************ 00:10:53.402 START TEST nvmf_nmic 00:10:53.402 ************************************ 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:53.402 * Looking for test storage... 
00:10:53.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:53.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.402 --rc genhtml_branch_coverage=1 00:10:53.402 --rc genhtml_function_coverage=1 00:10:53.402 --rc genhtml_legend=1 00:10:53.402 --rc geninfo_all_blocks=1 00:10:53.402 --rc geninfo_unexecuted_blocks=1 00:10:53.402 00:10:53.402 ' 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:53.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.402 --rc genhtml_branch_coverage=1 00:10:53.402 --rc genhtml_function_coverage=1 00:10:53.402 --rc genhtml_legend=1 00:10:53.402 --rc geninfo_all_blocks=1 00:10:53.402 --rc geninfo_unexecuted_blocks=1 00:10:53.402 00:10:53.402 ' 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:53.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.402 --rc genhtml_branch_coverage=1 00:10:53.402 --rc genhtml_function_coverage=1 00:10:53.402 --rc genhtml_legend=1 00:10:53.402 --rc geninfo_all_blocks=1 00:10:53.402 --rc geninfo_unexecuted_blocks=1 00:10:53.402 00:10:53.402 ' 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:53.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.402 --rc genhtml_branch_coverage=1 00:10:53.402 --rc genhtml_function_coverage=1 00:10:53.402 --rc genhtml_legend=1 00:10:53.402 --rc geninfo_all_blocks=1 00:10:53.402 --rc geninfo_unexecuted_blocks=1 00:10:53.402 00:10:53.402 ' 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:53.402 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:53.403 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:53.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:53.403 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:53.403 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:53.403 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:53.403 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:53.403 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:53.665 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:53.665 
09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:53.665 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:53.665 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:53.665 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:53.665 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:53.665 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.665 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:53.665 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.665 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:53.665 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:53.665 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:53.665 09:43:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:01.814 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:01.814 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:01.814 09:43:15 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:01.814 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:01.814 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:11:01.814 09:43:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:01.814 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:01.814 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:01.814 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:11:01.814 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:01.814 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:01.814 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:01.814 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:01.814 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:01.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:01.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms
00:11:01.814
00:11:01.814 --- 10.0.0.2 ping statistics ---
00:11:01.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:01.814 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms
00:11:01.815 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:01.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:01.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms
00:11:01.815
00:11:01.815 --- 10.0.0.1 ping statistics ---
00:11:01.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:01.815 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms
00:11:01.815 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:01.815 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0
00:11:01.815 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:01.815 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:01.815 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:01.815 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:01.815 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:01.815 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:01.815 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:01.815 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:11:01.815 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:01.815 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:01.815 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:11:01.815 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3733341
00:11:01.815 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3733341
00:11:01.815 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:11:01.815 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3733341 ']'
00:11:01.815 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:01.815 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:01.815 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:01.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:01.815 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:01.815 09:43:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:11:01.815 [2024-11-27 09:43:16.320704] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization...
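
Context for the start-up above: nvmf/common.sh has moved the target-side port (cvl_0_0) into a private network namespace, left the initiator-side port (cvl_0_1) in the root namespace, and pinged in both directions before launching nvmf_tgt inside the namespace; the DPDK EAL parameter line that follows belongs to that application start. Condensed from the xtrace above into a sketch (device names are specific to this rig):

# Sketch of the namespace plumbing done by nvmf/common.sh, run from an SPDK checkout
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
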
00:11:01.815 [2024-11-27 09:43:16.320797] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.815 [2024-11-27 09:43:16.420657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:01.815 [2024-11-27 09:43:16.475550] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:01.815 [2024-11-27 09:43:16.475604] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:01.815 [2024-11-27 09:43:16.475613] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.815 [2024-11-27 09:43:16.475620] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.815 [2024-11-27 09:43:16.475626] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:01.815 [2024-11-27 09:43:16.477649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.815 [2024-11-27 09:43:16.477811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:01.815 [2024-11-27 09:43:16.477972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.815 [2024-11-27 09:43:16.477972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:01.815 [2024-11-27 09:43:17.162259] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:01.815 Malloc0 00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x
00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:11:01.815 [2024-11-27 09:43:17.232431] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:11:01.815 test case1: single bdev can't be used in multiple subsystems
00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:11:01.815 [2024-11-27 09:43:17.268375] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:11:01.815 [2024-11-27 09:43:17.268395] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:11:01.815 [2024-11-27 09:43:17.268403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:01.815 request:
00:11:01.815 {
00:11:01.815 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:11:01.815 "namespace": {
00:11:01.815 "bdev_name": "Malloc0",
00:11:01.815 "no_auto_visible": false
00:11:01.815 },
00:11:01.815 "method": "nvmf_subsystem_add_ns",
00:11:01.815 "req_id": 1
00:11:01.815 }
00:11:01.815 Got JSON-RPC error response
00:11:01.815 response:
00:11:01.815 {
00:11:01.815 "code": -32602,
00:11:01.815 "message": "Invalid parameters"
00:11:01.815 }
00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:11:01.815 Adding namespace failed - expected result.
00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:11:01.815 test case2: host connect to nvmf target in multiple paths
00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:01.815 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:11:02.076 [2024-11-27 09:43:17.280505] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:11:02.076 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:02.076 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:03.456 09:43:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:11:04.835 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:11:04.835 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0
00:11:04.835 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:11:04.835 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:11:04.835 09:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2
00:11:07.372 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:11:07.372 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:11:07.372 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:11:07.372 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:11:07.372 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:11:07.372 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0
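
Summary of the two nmic.sh cases above: test case1 adds Malloc0 to cnode1, then tries to add the same bdev to cnode2; the first add takes an exclusive_write claim on the bdev, so the second add fails with the JSON-RPC error shown, and the script treats that failure as the expected result. Test case2 then adds a second listener on port 4421 and connects the initiator over both paths. The same flow as direct commands, reconstructed from the trace (a sketch; scripts/rpc.py stands in for the rpc_cmd helper):

# Sketch, assuming the target, Malloc0, and cnode1 are already set up as in this test
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # expected to fail: Malloc0 already claimed by cnode1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
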
09:43:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:11:07.372 [global]
00:11:07.372 thread=1
00:11:07.372 invalidate=1
00:11:07.372 rw=write
00:11:07.372 time_based=1
00:11:07.372 runtime=1
00:11:07.372 ioengine=libaio
00:11:07.372 direct=1
00:11:07.372 bs=4096
00:11:07.372 iodepth=1
00:11:07.372 norandommap=0
00:11:07.372 numjobs=1
00:11:07.372
00:11:07.372 verify_dump=1
00:11:07.372 verify_backlog=512
00:11:07.372 verify_state_save=0
00:11:07.372 do_verify=1
00:11:07.372 verify=crc32c-intel
00:11:07.372 [job0]
00:11:07.372 filename=/dev/nvme0n1
00:11:07.372 Could not set queue depth (nvme0n1)
00:11:07.372 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:07.372 fio-3.35
00:11:07.372 Starting 1 thread
00:11:08.755
00:11:08.755 job0: (groupid=0, jobs=1): err= 0: pid=3734885: Wed Nov 27 09:43:23 2024
00:11:08.755 read: IOPS=17, BW=70.6KiB/s (72.3kB/s)(72.0KiB/1020msec)
00:11:08.755 slat (nsec): min=7017, max=27324, avg=24411.78, stdev=5824.20
00:11:08.755 clat (usec): min=965, max=42014, avg=39309.04, stdev=9580.30
00:11:08.755 lat (usec): min=975, max=42040, avg=39333.45, stdev=9583.93
00:11:08.755 clat percentiles (usec):
00:11:08.755 | 1.00th=[ 963], 5.00th=[ 963], 10.00th=[41157], 20.00th=[41157],
00:11:08.755 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681],
00:11:08.755 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:11:08.755 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:11:08.755 | 99.99th=[42206]
00:11:08.755 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets
00:11:08.755 slat (nsec): min=9187, max=66254, avg=29137.09, stdev=10021.07
00:11:08.755 clat (usec): min=252, max=777, avg=573.53, stdev=90.73
00:11:08.755 lat (usec): min=286, max=825, avg=602.67, stdev=95.36
00:11:08.755 clat percentiles (usec):
00:11:08.755 | 1.00th=[ 338], 5.00th=[ 408], 10.00th=[ 445], 20.00th=[ 498],
00:11:08.755 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 578], 60.00th=[ 594],
00:11:08.755 | 70.00th=[ 635], 80.00th=[ 660], 90.00th=[ 676], 95.00th=[ 701],
00:11:08.755 | 99.00th=[ 742], 99.50th=[ 750], 99.90th=[ 775], 99.95th=[ 775],
00:11:08.755 | 99.99th=[ 775]
00:11:08.755 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1
00:11:08.755 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:11:08.755 lat (usec) : 500=20.19%, 750=75.85%, 1000=0.75%
00:11:08.755 lat (msec) : 50=3.21%
00:11:08.755 cpu : usr=0.69%, sys=2.16%, ctx=530, majf=0, minf=1
00:11:08.755 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:08.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:08.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:08.755 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:08.755 latency : target=0, window=0, percentile=100.00%, depth=1
00:11:08.755
00:11:08.755 Run status group 0 (all jobs):
00:11:08.755 READ: bw=70.6KiB/s (72.3kB/s), 70.6KiB/s-70.6KiB/s (72.3kB/s-72.3kB/s), io=72.0KiB (73.7kB), run=1020-1020msec
00:11:08.755 WRITE: bw=2008KiB/s (2056kB/s), 2008KiB/s-2008KiB/s (2056kB/s-2056kB/s), io=2048KiB (2097kB), run=1020-1020msec
00:11:08.755
00:11:08.755 Disk stats (read/write):
00:11:08.755 nvme0n1: ios=65/512, merge=0/0, ticks=808/241, in_queue=1049, util=95.79%
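
The fio output above comes from a job file generated by scripts/fio-wrapper (-p nvmf -i 4096 -d 1 -t write -r 1 -v): a one-second, queue-depth-1, 4 KiB sequential-write pass with crc32c-intel verification against the freshly connected namespace. A hand-rolled equivalent as a single fio command (a sketch assuming fio is installed; option names and values are copied from the job file printed above, and passing them as --option=value long options is an assumption about the equivalent CLI form):

# Sketch: raw fio command roughly equivalent to the wrapper-generated job file
fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
    --rw=write --bs=4096 --iodepth=1 --numjobs=1 --thread=1 --invalidate=1 \
    --time_based=1 --runtime=1 --norandommap=0 --do_verify=1 \
    --verify=crc32c-intel --verify_dump=1 --verify_backlog=512 --verify_state_save=0
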
09:43:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:08.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:11:08.755 09:43:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:08.755 09:43:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0
00:11:08.755 09:43:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:11:08.755 09:43:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:08.755 09:43:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:11:08.755 09:43:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:08.756 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0
00:11:08.756 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:11:08.756 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini
00:11:08.756 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:08.756 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync
00:11:08.756 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:08.756 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e
00:11:08.756 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:08.756 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:08.756 rmmod nvme_tcp
00:11:08.756 rmmod nvme_fabrics
00:11:08.756 rmmod nvme_keyring
00:11:08.756 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:08.756 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e
00:11:08.756 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0
00:11:08.756 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3733341 ']'
00:11:08.756 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3733341
00:11:08.756 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3733341 ']'
00:11:08.756 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3733341
00:11:08.756 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname
00:11:08.756 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:08.756 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3733341
00:11:08.756 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:08.756 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:08.756 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3733341'
00:11:08.756 killing process with pid 3733341
00:11:08.756 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3733341
09:43:24
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3733341 00:11:09.016 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:09.016 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:09.016 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:09.016 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:09.016 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:11:09.016 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:09.016 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:11:09.016 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:09.016 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:09.016 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.016 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.016 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.928 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:10.928 00:11:10.928 real 0m17.749s 00:11:10.928 user 0m46.447s 00:11:10.928 sys 0m6.501s 00:11:10.928 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.928 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:10.928 ************************************ 00:11:10.928 END TEST nvmf_nmic 00:11:10.928 ************************************ 00:11:11.189 09:43:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:11.189 09:43:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:11.189 09:43:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.189 09:43:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:11.189 ************************************ 00:11:11.189 START TEST nvmf_fio_target 00:11:11.189 ************************************ 00:11:11.189 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:11.189 * Looking for test storage... 
00:11:11.189 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:11.189 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:11.189 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:11:11.189 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:11.189 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:11.189 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:11.189 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:11.189 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:11.189 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:11.189 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:11.189 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:11.189 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:11.189 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:11.189 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:11.189 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:11.189 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:11.189 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:11.189 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:11.189 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:11.189 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:11.189 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:11.189 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:11.189 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:11.189 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:11.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.450 --rc genhtml_branch_coverage=1 00:11:11.450 --rc genhtml_function_coverage=1 00:11:11.450 --rc genhtml_legend=1 00:11:11.450 --rc geninfo_all_blocks=1 00:11:11.450 --rc geninfo_unexecuted_blocks=1 00:11:11.450 00:11:11.450 ' 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:11.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.450 --rc genhtml_branch_coverage=1 00:11:11.450 --rc genhtml_function_coverage=1 00:11:11.450 --rc genhtml_legend=1 00:11:11.450 --rc geninfo_all_blocks=1 00:11:11.450 --rc geninfo_unexecuted_blocks=1 00:11:11.450 00:11:11.450 ' 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:11.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.450 --rc genhtml_branch_coverage=1 00:11:11.450 --rc genhtml_function_coverage=1 00:11:11.450 --rc genhtml_legend=1 00:11:11.450 --rc geninfo_all_blocks=1 00:11:11.450 --rc geninfo_unexecuted_blocks=1 00:11:11.450 00:11:11.450 ' 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:11.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.450 --rc genhtml_branch_coverage=1 00:11:11.450 --rc genhtml_function_coverage=1 00:11:11.450 --rc genhtml_legend=1 00:11:11.450 --rc geninfo_all_blocks=1 00:11:11.450 --rc geninfo_unexecuted_blocks=1 00:11:11.450 00:11:11.450 ' 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:11.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:11.450 09:43:26 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:11.450 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:11.451 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:11.451 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:19.588 09:43:33 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:19.588 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:19.588 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:19.589 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.589 09:43:33 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:19.589 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:19.589 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:19.589 09:43:33 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:19.589 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:19.589 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:19.589 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:19.589 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:19.589 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:19.589 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:19.589 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:19.589 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:19.589 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:19.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:19.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:11:19.589 00:11:19.589 --- 10.0.0.2 ping statistics --- 00:11:19.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.589 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:11:19.589 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:19.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:19.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:11:19.589 00:11:19.589 --- 10.0.0.1 ping statistics --- 00:11:19.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.589 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:11:19.589 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:19.589 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:11:19.589 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:19.589 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:19.589 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:19.589 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:19.589 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:19.589 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:19.589 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:19.589 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:19.589 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:19.589 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:19.589 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.589 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3739447 00:11:19.589 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3739447 00:11:19.589 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:19.589 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3739447 ']' 00:11:19.589 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.589 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:19.589 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.589 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:19.589 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.589 [2024-11-27 09:43:34.351113] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
00:11:19.589 [2024-11-27 09:43:34.351188] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:19.589 [2024-11-27 09:43:34.452849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:19.589 [2024-11-27 09:43:34.507639] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:19.590 [2024-11-27 09:43:34.507694] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:19.590 [2024-11-27 09:43:34.507703] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:19.590 [2024-11-27 09:43:34.507710] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:19.590 [2024-11-27 09:43:34.507716] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:19.590 [2024-11-27 09:43:34.510114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.590 [2024-11-27 09:43:34.510275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:19.590 [2024-11-27 09:43:34.510324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.590 [2024-11-27 09:43:34.510324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:19.851 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:19.851 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:19.851 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:19.851 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:19.851 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.851 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:19.851 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:20.112 [2024-11-27 09:43:35.387331] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:20.112 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:20.372 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:20.372 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:20.633 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:20.633 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:20.633 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:20.633 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:20.895 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:20.895 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:21.155 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:21.416 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:21.416 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:21.676 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:21.676 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:21.676 09:43:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:21.676 09:43:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:21.936 09:43:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:22.196 09:43:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:22.196 09:43:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:22.456 09:43:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:22.456 09:43:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:22.457 09:43:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:22.717 [2024-11-27 09:43:38.025880] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:22.717 09:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:22.977 09:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:22.977 09:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:24.886 09:43:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:24.886 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:24.886 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:24.886 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:24.886 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:24.886 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:26.808 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:26.808 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:26.808 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:26.808 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:26.808 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:26.808 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:26.808 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:26.808 [global] 00:11:26.808 thread=1 00:11:26.809 invalidate=1 00:11:26.809 rw=write 00:11:26.809 time_based=1 00:11:26.809 runtime=1 00:11:26.809 ioengine=libaio 00:11:26.809 direct=1 00:11:26.809 bs=4096 00:11:26.809 iodepth=1 00:11:26.809 norandommap=0 00:11:26.809 numjobs=1 00:11:26.809 00:11:26.809 verify_dump=1 00:11:26.809 verify_backlog=512 00:11:26.809 verify_state_save=0 00:11:26.809 do_verify=1 00:11:26.809 verify=crc32c-intel 00:11:26.809 [job0] 00:11:26.809 filename=/dev/nvme0n1 00:11:26.809 [job1] 00:11:26.809 filename=/dev/nvme0n2 00:11:26.809 [job2] 00:11:26.809 filename=/dev/nvme0n3 00:11:26.809 [job3] 00:11:26.809 filename=/dev/nvme0n4 00:11:26.809 Could not set queue depth (nvme0n1) 00:11:26.809 Could not set queue depth (nvme0n2) 00:11:26.809 Could not set queue depth (nvme0n3) 00:11:26.809 Could not set queue depth (nvme0n4) 00:11:27.069 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:27.069 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:27.069 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:27.069 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:27.069 fio-3.35 00:11:27.069 Starting 4 threads 00:11:28.455 00:11:28.455 job0: (groupid=0, jobs=1): err= 0: pid=3741170: Wed Nov 27 09:43:43 2024 00:11:28.455 read: IOPS=18, BW=75.5KiB/s (77.3kB/s)(76.0KiB/1007msec) 00:11:28.455 slat (nsec): min=7284, max=26407, avg=16077.68, stdev=8022.19 00:11:28.455 clat (usec): min=970, max=42098, avg=39219.57, stdev=9273.09 00:11:28.455 lat (usec): min=978, max=42124, avg=39235.64, stdev=9275.43 00:11:28.455 clat percentiles (usec): 00:11:28.455 | 1.00th=[ 971], 5.00th=[ 971], 10.00th=[40633], 
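Before the per-job results below: by this point the xtrace above has already built the complete target — two plain Malloc namespaces plus a raid0 and a concat0 bdev, exported over TCP from inside the cvl_0_0_ns_spdk network namespace, then attached on the initiator side with nvme connect. Condensed into plain commands, the sequence is roughly the following sketch; $SPDK stands in for the workspace's spdk checkout and the hostnqn/hostid values are the ones gen-hostnqn produced earlier in this log, so treat it as a reconstruction of what the harness ran, not the harness itself:

  # network topology: move the target-side NIC into its own namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # target application and storage stack
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  RPC=$SPDK/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  for i in 0 1 2 3 4 5 6; do $RPC bdev_malloc_create 64 512; done     # Malloc0..Malloc6
  $RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $RPC bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for bdev in Malloc0 Malloc1 raid0 concat0; do
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $bdev       # becomes nvme0n1..n4
  done
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator: connect, then waitforserial polls lsblk for the 4 namespaces
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
               --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be \
               -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

This is why each fio pass sees exactly four block devices with serial SPDKISFASTANDAWESOME: one per namespace added to cnode1.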
20.00th=[41157], 00:11:28.455 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:28.455 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:11:28.455 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:28.455 | 99.99th=[42206] 00:11:28.455 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:11:28.455 slat (nsec): min=6389, max=61093, avg=13682.57, stdev=10162.79 00:11:28.455 clat (usec): min=227, max=846, avg=492.51, stdev=109.12 00:11:28.455 lat (usec): min=235, max=882, avg=506.20, stdev=114.83 00:11:28.455 clat percentiles (usec): 00:11:28.455 | 1.00th=[ 262], 5.00th=[ 343], 10.00th=[ 367], 20.00th=[ 400], 00:11:28.455 | 30.00th=[ 441], 40.00th=[ 465], 50.00th=[ 482], 60.00th=[ 502], 00:11:28.455 | 70.00th=[ 529], 80.00th=[ 578], 90.00th=[ 652], 95.00th=[ 701], 00:11:28.455 | 99.00th=[ 791], 99.50th=[ 816], 99.90th=[ 848], 99.95th=[ 848], 00:11:28.455 | 99.99th=[ 848] 00:11:28.455 bw ( KiB/s): min= 4096, max= 4096, per=40.59%, avg=4096.00, stdev= 0.00, samples=1 00:11:28.455 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:28.455 lat (usec) : 250=0.56%, 500=56.31%, 750=37.48%, 1000=2.26% 00:11:28.455 lat (msec) : 50=3.39% 00:11:28.455 cpu : usr=0.60%, sys=0.40%, ctx=533, majf=0, minf=1 00:11:28.455 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:28.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.455 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:28.455 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:28.455 job1: (groupid=0, jobs=1): err= 0: pid=3741174: Wed Nov 27 09:43:43 2024 00:11:28.455 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:28.455 slat (nsec): min=6675, max=62458, avg=27939.68, stdev=3653.67 00:11:28.455 clat (usec): min=579, max=1220, avg=970.53, stdev=77.83 00:11:28.455 lat (usec): min=587, max=1248, avg=998.47, stdev=78.43 00:11:28.455 clat percentiles (usec): 00:11:28.455 | 1.00th=[ 758], 5.00th=[ 824], 10.00th=[ 881], 20.00th=[ 922], 00:11:28.455 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 971], 60.00th=[ 988], 00:11:28.455 | 70.00th=[ 1004], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1090], 00:11:28.455 | 99.00th=[ 1139], 99.50th=[ 1156], 99.90th=[ 1221], 99.95th=[ 1221], 00:11:28.455 | 99.99th=[ 1221] 00:11:28.455 write: IOPS=763, BW=3053KiB/s (3126kB/s)(3056KiB/1001msec); 0 zone resets 00:11:28.455 slat (nsec): min=9437, max=70444, avg=31660.55, stdev=11657.41 00:11:28.455 clat (usec): min=194, max=2293, avg=595.09, stdev=141.24 00:11:28.455 lat (usec): min=218, max=2333, avg=626.75, stdev=146.45 00:11:28.455 clat percentiles (usec): 00:11:28.455 | 1.00th=[ 277], 5.00th=[ 383], 10.00th=[ 424], 20.00th=[ 490], 00:11:28.455 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 644], 00:11:28.455 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 734], 95.00th=[ 758], 00:11:28.455 | 99.00th=[ 832], 99.50th=[ 889], 99.90th=[ 2278], 99.95th=[ 2278], 00:11:28.455 | 99.99th=[ 2278] 00:11:28.455 bw ( KiB/s): min= 4096, max= 4096, per=40.59%, avg=4096.00, stdev= 0.00, samples=1 00:11:28.455 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:28.455 lat (usec) : 250=0.39%, 500=13.24%, 750=42.63%, 1000=29.55% 00:11:28.455 lat (msec) : 2=14.11%, 4=0.08% 00:11:28.455 cpu : usr=3.40%, sys=4.20%, ctx=1277, majf=0, minf=1 00:11:28.455 IO depths 
: 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:28.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.455 issued rwts: total=512,764,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:28.455 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:28.455 job2: (groupid=0, jobs=1): err= 0: pid=3741180: Wed Nov 27 09:43:43 2024 00:11:28.455 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:28.455 slat (nsec): min=26245, max=46846, avg=27064.32, stdev=2594.74 00:11:28.455 clat (usec): min=743, max=1241, avg=995.18, stdev=74.70 00:11:28.455 lat (usec): min=770, max=1268, avg=1022.24, stdev=74.67 00:11:28.455 clat percentiles (usec): 00:11:28.455 | 1.00th=[ 799], 5.00th=[ 848], 10.00th=[ 906], 20.00th=[ 947], 00:11:28.455 | 30.00th=[ 971], 40.00th=[ 988], 50.00th=[ 1004], 60.00th=[ 1020], 00:11:28.455 | 70.00th=[ 1037], 80.00th=[ 1057], 90.00th=[ 1074], 95.00th=[ 1106], 00:11:28.455 | 99.00th=[ 1188], 99.50th=[ 1205], 99.90th=[ 1237], 99.95th=[ 1237], 00:11:28.455 | 99.99th=[ 1237] 00:11:28.455 write: IOPS=797, BW=3189KiB/s (3265kB/s)(3192KiB/1001msec); 0 zone resets 00:11:28.455 slat (nsec): min=10158, max=58803, avg=29951.85, stdev=11376.53 00:11:28.455 clat (usec): min=237, max=941, avg=555.10, stdev=123.61 00:11:28.455 lat (usec): min=248, max=976, avg=585.05, stdev=127.31 00:11:28.455 clat percentiles (usec): 00:11:28.455 | 1.00th=[ 281], 5.00th=[ 343], 10.00th=[ 392], 20.00th=[ 453], 00:11:28.455 | 30.00th=[ 486], 40.00th=[ 519], 50.00th=[ 553], 60.00th=[ 594], 00:11:28.455 | 70.00th=[ 619], 80.00th=[ 660], 90.00th=[ 709], 95.00th=[ 750], 00:11:28.455 | 99.00th=[ 881], 99.50th=[ 898], 99.90th=[ 938], 99.95th=[ 938], 00:11:28.455 | 99.99th=[ 938] 00:11:28.455 bw ( KiB/s): min= 4096, max= 4096, per=40.59%, avg=4096.00, stdev= 0.00, samples=1 00:11:28.455 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:28.455 lat (usec) : 250=0.08%, 500=20.99%, 750=36.79%, 1000=22.21% 00:11:28.455 lat (msec) : 2=19.92% 00:11:28.455 cpu : usr=2.10%, sys=3.70%, ctx=1311, majf=0, minf=1 00:11:28.455 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:28.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.455 issued rwts: total=512,798,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:28.455 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:28.455 job3: (groupid=0, jobs=1): err= 0: pid=3741187: Wed Nov 27 09:43:43 2024 00:11:28.455 read: IOPS=218, BW=874KiB/s (895kB/s)(896KiB/1025msec) 00:11:28.455 slat (nsec): min=7708, max=63938, avg=28886.08, stdev=4075.86 00:11:28.455 clat (usec): min=891, max=41234, avg=3086.26, stdev=8649.09 00:11:28.455 lat (usec): min=919, max=41262, avg=3115.15, stdev=8649.01 00:11:28.455 clat percentiles (usec): 00:11:28.455 | 1.00th=[ 947], 5.00th=[ 1020], 10.00th=[ 1037], 20.00th=[ 1090], 00:11:28.455 | 30.00th=[ 1106], 40.00th=[ 1123], 50.00th=[ 1123], 60.00th=[ 1139], 00:11:28.455 | 70.00th=[ 1156], 80.00th=[ 1188], 90.00th=[ 1237], 95.00th=[ 1336], 00:11:28.455 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:28.455 | 99.99th=[41157] 00:11:28.455 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:11:28.455 slat (usec): min=4, max=2665, avg=35.42, stdev=117.19 00:11:28.455 clat (usec): min=292, max=960, 
avg=591.80, stdev=120.99 00:11:28.455 lat (usec): min=303, max=3625, avg=627.21, stdev=182.91 00:11:28.455 clat percentiles (usec): 00:11:28.455 | 1.00th=[ 334], 5.00th=[ 371], 10.00th=[ 441], 20.00th=[ 490], 00:11:28.455 | 30.00th=[ 523], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 627], 00:11:28.455 | 70.00th=[ 660], 80.00th=[ 693], 90.00th=[ 742], 95.00th=[ 791], 00:11:28.455 | 99.00th=[ 865], 99.50th=[ 881], 99.90th=[ 963], 99.95th=[ 963], 00:11:28.455 | 99.99th=[ 963] 00:11:28.455 bw ( KiB/s): min= 4096, max= 4096, per=40.59%, avg=4096.00, stdev= 0.00, samples=1 00:11:28.455 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:28.455 lat (usec) : 500=16.44%, 750=47.42%, 1000=6.52% 00:11:28.455 lat (msec) : 2=28.12%, 50=1.49% 00:11:28.455 cpu : usr=1.17%, sys=2.83%, ctx=738, majf=0, minf=1 00:11:28.455 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:28.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.456 issued rwts: total=224,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:28.456 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:28.456 00:11:28.456 Run status group 0 (all jobs): 00:11:28.456 READ: bw=4944KiB/s (5063kB/s), 75.5KiB/s-2046KiB/s (77.3kB/s-2095kB/s), io=5068KiB (5190kB), run=1001-1025msec 00:11:28.456 WRITE: bw=9.85MiB/s (10.3MB/s), 1998KiB/s-3189KiB/s (2046kB/s-3265kB/s), io=10.1MiB (10.6MB), run=1001-1025msec 00:11:28.456 00:11:28.456 Disk stats (read/write): 00:11:28.456 nvme0n1: ios=36/512, merge=0/0, ticks=1375/245, in_queue=1620, util=83.97% 00:11:28.456 nvme0n2: ios=538/512, merge=0/0, ticks=547/245, in_queue=792, util=91.11% 00:11:28.456 nvme0n3: ios=534/512, merge=0/0, ticks=1377/269, in_queue=1646, util=91.85% 00:11:28.456 nvme0n4: ios=232/512, merge=0/0, ticks=614/249, in_queue=863, util=94.11% 00:11:28.456 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:28.456 [global] 00:11:28.456 thread=1 00:11:28.456 invalidate=1 00:11:28.456 rw=randwrite 00:11:28.456 time_based=1 00:11:28.456 runtime=1 00:11:28.456 ioengine=libaio 00:11:28.456 direct=1 00:11:28.456 bs=4096 00:11:28.456 iodepth=1 00:11:28.456 norandommap=0 00:11:28.456 numjobs=1 00:11:28.456 00:11:28.456 verify_dump=1 00:11:28.456 verify_backlog=512 00:11:28.456 verify_state_save=0 00:11:28.456 do_verify=1 00:11:28.456 verify=crc32c-intel 00:11:28.456 [job0] 00:11:28.456 filename=/dev/nvme0n1 00:11:28.456 [job1] 00:11:28.456 filename=/dev/nvme0n2 00:11:28.456 [job2] 00:11:28.456 filename=/dev/nvme0n3 00:11:28.456 [job3] 00:11:28.456 filename=/dev/nvme0n4 00:11:28.456 Could not set queue depth (nvme0n1) 00:11:28.456 Could not set queue depth (nvme0n2) 00:11:28.456 Could not set queue depth (nvme0n3) 00:11:28.456 Could not set queue depth (nvme0n4) 00:11:28.716 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:28.716 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:28.716 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:28.716 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:28.716 fio-3.35 00:11:28.716 Starting 4 threads 00:11:30.100 
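This second pass reuses the same job file as the first run with only rw changed from write to randwrite. Reproduced as a standalone file, the config dumped above would look like the sketch below; the file name nvmf-randwrite.fio is hypothetical, the option values and /dev/nvme0n1..n4 device names are taken verbatim from the dump:

  # contents copied from the job file fio-wrapper printed above
  cat > nvmf-randwrite.fio <<'EOF'
  [global]
  thread=1
  invalidate=1
  rw=randwrite
  time_based=1
  runtime=1
  ioengine=libaio
  direct=1
  bs=4096
  iodepth=1
  norandommap=0
  numjobs=1
  verify_dump=1
  verify_backlog=512
  verify_state_save=0
  do_verify=1
  verify=crc32c-intel

  [job0]
  filename=/dev/nvme0n1
  [job1]
  filename=/dev/nvme0n2
  [job2]
  filename=/dev/nvme0n3
  [job3]
  filename=/dev/nvme0n4
  EOF
  fio nvmf-randwrite.fio

With runtime=1 and iodepth=1 each job issues roughly one second of single-outstanding 4 KiB verified I/O, which matches the small issued-I/O counts in the results that follow.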
00:11:30.100 job0: (groupid=0, jobs=1): err= 0: pid=3741692: Wed Nov 27 09:43:45 2024 00:11:30.100 read: IOPS=18, BW=75.7KiB/s (77.5kB/s)(76.0KiB/1004msec) 00:11:30.100 slat (nsec): min=27050, max=27680, avg=27406.58, stdev=196.60 00:11:30.100 clat (usec): min=40885, max=41543, avg=40996.03, stdev=141.44 00:11:30.100 lat (usec): min=40912, max=41570, avg=41023.43, stdev=141.48 00:11:30.100 clat percentiles (usec): 00:11:30.100 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:11:30.100 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:30.100 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:11:30.100 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:11:30.100 | 99.99th=[41681] 00:11:30.100 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:11:30.100 slat (nsec): min=9693, max=59048, avg=23008.01, stdev=12408.29 00:11:30.100 clat (usec): min=134, max=684, avg=408.88, stdev=89.35 00:11:30.100 lat (usec): min=147, max=718, avg=431.89, stdev=97.66 00:11:30.100 clat percentiles (usec): 00:11:30.100 | 1.00th=[ 241], 5.00th=[ 277], 10.00th=[ 293], 20.00th=[ 326], 00:11:30.100 | 30.00th=[ 347], 40.00th=[ 363], 50.00th=[ 412], 60.00th=[ 453], 00:11:30.100 | 70.00th=[ 474], 80.00th=[ 498], 90.00th=[ 519], 95.00th=[ 545], 00:11:30.100 | 99.00th=[ 594], 99.50th=[ 611], 99.90th=[ 685], 99.95th=[ 685], 00:11:30.100 | 99.99th=[ 685] 00:11:30.100 bw ( KiB/s): min= 4096, max= 4096, per=43.66%, avg=4096.00, stdev= 0.00, samples=1 00:11:30.100 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:30.100 lat (usec) : 250=1.51%, 500=78.72%, 750=16.20% 00:11:30.100 lat (msec) : 50=3.58% 00:11:30.100 cpu : usr=0.60%, sys=1.10%, ctx=534, majf=0, minf=1 00:11:30.100 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:30.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.100 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:30.100 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:30.100 job1: (groupid=0, jobs=1): err= 0: pid=3741698: Wed Nov 27 09:43:45 2024 00:11:30.100 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:30.100 slat (nsec): min=8278, max=45692, avg=26984.11, stdev=2970.27 00:11:30.100 clat (usec): min=645, max=1370, avg=976.01, stdev=96.93 00:11:30.100 lat (usec): min=671, max=1397, avg=1003.00, stdev=97.16 00:11:30.100 clat percentiles (usec): 00:11:30.100 | 1.00th=[ 701], 5.00th=[ 791], 10.00th=[ 840], 20.00th=[ 914], 00:11:30.100 | 30.00th=[ 947], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 1004], 00:11:30.100 | 70.00th=[ 1020], 80.00th=[ 1045], 90.00th=[ 1074], 95.00th=[ 1106], 00:11:30.100 | 99.00th=[ 1172], 99.50th=[ 1287], 99.90th=[ 1369], 99.95th=[ 1369], 00:11:30.100 | 99.99th=[ 1369] 00:11:30.100 write: IOPS=848, BW=3393KiB/s (3474kB/s)(3396KiB/1001msec); 0 zone resets 00:11:30.100 slat (nsec): min=9462, max=68070, avg=29698.02, stdev=9793.64 00:11:30.100 clat (usec): min=120, max=3719, avg=529.09, stdev=202.70 00:11:30.100 lat (usec): min=130, max=3752, avg=558.79, stdev=205.59 00:11:30.100 clat percentiles (usec): 00:11:30.100 | 1.00th=[ 165], 5.00th=[ 281], 10.00th=[ 347], 20.00th=[ 396], 00:11:30.100 | 30.00th=[ 441], 40.00th=[ 486], 50.00th=[ 529], 60.00th=[ 570], 00:11:30.100 | 70.00th=[ 611], 80.00th=[ 652], 90.00th=[ 693], 95.00th=[ 725], 00:11:30.100 | 
99.00th=[ 807], 99.50th=[ 906], 99.90th=[ 3720], 99.95th=[ 3720], 00:11:30.100 | 99.99th=[ 3720] 00:11:30.100 bw ( KiB/s): min= 4096, max= 4096, per=43.66%, avg=4096.00, stdev= 0.00, samples=1 00:11:30.100 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:30.100 lat (usec) : 250=1.32%, 500=25.86%, 750=34.02%, 1000=22.56% 00:11:30.100 lat (msec) : 2=16.09%, 4=0.15% 00:11:30.100 cpu : usr=2.20%, sys=3.90%, ctx=1363, majf=0, minf=1 00:11:30.100 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:30.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.100 issued rwts: total=512,849,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:30.100 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:30.100 job2: (groupid=0, jobs=1): err= 0: pid=3741703: Wed Nov 27 09:43:45 2024 00:11:30.100 read: IOPS=16, BW=66.9KiB/s (68.5kB/s)(68.0KiB/1017msec) 00:11:30.100 slat (nsec): min=27337, max=27955, avg=27595.94, stdev=168.91 00:11:30.100 clat (usec): min=40896, max=41044, avg=40960.41, stdev=41.80 00:11:30.100 lat (usec): min=40923, max=41071, avg=40988.01, stdev=41.80 00:11:30.100 clat percentiles (usec): 00:11:30.100 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:30.100 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:30.100 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:30.100 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:30.100 | 99.99th=[41157] 00:11:30.100 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:11:30.100 slat (nsec): min=10138, max=59589, avg=31848.52, stdev=8969.33 00:11:30.100 clat (usec): min=227, max=945, avg=581.93, stdev=150.96 00:11:30.100 lat (usec): min=238, max=979, avg=613.78, stdev=154.21 00:11:30.100 clat percentiles (usec): 00:11:30.100 | 1.00th=[ 262], 5.00th=[ 343], 10.00th=[ 375], 20.00th=[ 433], 00:11:30.100 | 30.00th=[ 494], 40.00th=[ 545], 50.00th=[ 594], 60.00th=[ 635], 00:11:30.100 | 70.00th=[ 676], 80.00th=[ 709], 90.00th=[ 783], 95.00th=[ 824], 00:11:30.100 | 99.00th=[ 906], 99.50th=[ 930], 99.90th=[ 947], 99.95th=[ 947], 00:11:30.100 | 99.99th=[ 947] 00:11:30.100 bw ( KiB/s): min= 4096, max= 4096, per=43.66%, avg=4096.00, stdev= 0.00, samples=1 00:11:30.100 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:30.101 lat (usec) : 250=0.38%, 500=29.11%, 750=53.88%, 1000=13.42% 00:11:30.101 lat (msec) : 50=3.21% 00:11:30.101 cpu : usr=0.69%, sys=1.77%, ctx=531, majf=0, minf=1 00:11:30.101 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:30.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.101 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:30.101 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:30.101 job3: (groupid=0, jobs=1): err= 0: pid=3741707: Wed Nov 27 09:43:45 2024 00:11:30.101 read: IOPS=339, BW=1359KiB/s (1391kB/s)(1360KiB/1001msec) 00:11:30.101 slat (nsec): min=26788, max=58291, avg=28282.93, stdev=3504.99 00:11:30.101 clat (usec): min=770, max=42007, avg=1935.69, stdev=5725.95 00:11:30.101 lat (usec): min=798, max=42034, avg=1963.98, stdev=5725.86 00:11:30.101 clat percentiles (usec): 00:11:30.101 | 1.00th=[ 824], 5.00th=[ 930], 10.00th=[ 996], 20.00th=[ 
1057], 00:11:30.101 | 30.00th=[ 1090], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1139], 00:11:30.101 | 70.00th=[ 1156], 80.00th=[ 1172], 90.00th=[ 1205], 95.00th=[ 1254], 00:11:30.101 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:11:30.101 | 99.99th=[42206] 00:11:30.101 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:11:30.101 slat (nsec): min=9766, max=53194, avg=30084.97, stdev=10145.73 00:11:30.101 clat (usec): min=259, max=853, avg=604.10, stdev=113.84 00:11:30.101 lat (usec): min=288, max=896, avg=634.19, stdev=118.06 00:11:30.101 clat percentiles (usec): 00:11:30.101 | 1.00th=[ 347], 5.00th=[ 392], 10.00th=[ 457], 20.00th=[ 494], 00:11:30.101 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 652], 00:11:30.101 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 742], 95.00th=[ 775], 00:11:30.101 | 99.00th=[ 840], 99.50th=[ 848], 99.90th=[ 857], 99.95th=[ 857], 00:11:30.101 | 99.99th=[ 857] 00:11:30.101 bw ( KiB/s): min= 4096, max= 4096, per=43.66%, avg=4096.00, stdev= 0.00, samples=1 00:11:30.101 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:30.101 lat (usec) : 500=12.91%, 750=42.25%, 1000=9.04% 00:11:30.101 lat (msec) : 2=34.98%, 50=0.82% 00:11:30.101 cpu : usr=1.20%, sys=2.60%, ctx=853, majf=0, minf=1 00:11:30.101 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:30.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.101 issued rwts: total=340,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:30.101 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:30.101 00:11:30.101 Run status group 0 (all jobs): 00:11:30.101 READ: bw=3493KiB/s (3576kB/s), 66.9KiB/s-2046KiB/s (68.5kB/s-2095kB/s), io=3552KiB (3637kB), run=1001-1017msec 00:11:30.101 WRITE: bw=9381KiB/s (9606kB/s), 2014KiB/s-3393KiB/s (2062kB/s-3474kB/s), io=9540KiB (9769kB), run=1001-1017msec 00:11:30.101 00:11:30.101 Disk stats (read/write): 00:11:30.101 nvme0n1: ios=42/512, merge=0/0, ticks=1411/197, in_queue=1608, util=84.37% 00:11:30.101 nvme0n2: ios=564/593, merge=0/0, ticks=715/263, in_queue=978, util=88.89% 00:11:30.101 nvme0n3: ios=73/512, merge=0/0, ticks=657/278, in_queue=935, util=92.83% 00:11:30.101 nvme0n4: ios=317/512, merge=0/0, ticks=644/308, in_queue=952, util=96.16% 00:11:30.101 09:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:30.101 [global] 00:11:30.101 thread=1 00:11:30.101 invalidate=1 00:11:30.101 rw=write 00:11:30.101 time_based=1 00:11:30.101 runtime=1 00:11:30.101 ioengine=libaio 00:11:30.101 direct=1 00:11:30.101 bs=4096 00:11:30.101 iodepth=128 00:11:30.101 norandommap=0 00:11:30.101 numjobs=1 00:11:30.101 00:11:30.101 verify_dump=1 00:11:30.101 verify_backlog=512 00:11:30.101 verify_state_save=0 00:11:30.101 do_verify=1 00:11:30.101 verify=crc32c-intel 00:11:30.101 [job0] 00:11:30.101 filename=/dev/nvme0n1 00:11:30.101 [job1] 00:11:30.101 filename=/dev/nvme0n2 00:11:30.101 [job2] 00:11:30.101 filename=/dev/nvme0n3 00:11:30.101 [job3] 00:11:30.101 filename=/dev/nvme0n4 00:11:30.101 Could not set queue depth (nvme0n1) 00:11:30.101 Could not set queue depth (nvme0n2) 00:11:30.101 Could not set queue depth (nvme0n3) 00:11:30.101 Could not set queue depth (nvme0n4) 00:11:30.361 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:30.361 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:30.361 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:30.361 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:30.361 fio-3.35 00:11:30.361 Starting 4 threads 00:11:31.748 00:11:31.748 job0: (groupid=0, jobs=1): err= 0: pid=3742207: Wed Nov 27 09:43:46 2024 00:11:31.748 read: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec) 00:11:31.748 slat (nsec): min=938, max=19377k, avg=74504.01, stdev=597552.90 00:11:31.748 clat (usec): min=2598, max=44661, avg=9700.00, stdev=4837.20 00:11:31.748 lat (usec): min=2605, max=44669, avg=9774.51, stdev=4876.48 00:11:31.748 clat percentiles (usec): 00:11:31.748 | 1.00th=[ 3163], 5.00th=[ 5407], 10.00th=[ 6849], 20.00th=[ 7177], 00:11:31.748 | 30.00th=[ 7635], 40.00th=[ 8094], 50.00th=[ 8586], 60.00th=[ 9110], 00:11:31.748 | 70.00th=[ 9896], 80.00th=[11207], 90.00th=[13698], 95.00th=[17433], 00:11:31.748 | 99.00th=[26608], 99.50th=[44303], 99.90th=[44827], 99.95th=[44827], 00:11:31.748 | 99.99th=[44827] 00:11:31.748 write: IOPS=6720, BW=26.2MiB/s (27.5MB/s)(26.4MiB/1004msec); 0 zone resets 00:11:31.748 slat (nsec): min=1650, max=7720.4k, avg=66683.78, stdev=449860.81 00:11:31.748 clat (usec): min=362, max=44391, avg=9287.45, stdev=4989.88 00:11:31.748 lat (usec): min=382, max=44395, avg=9354.14, stdev=5004.96 00:11:31.748 clat percentiles (usec): 00:11:31.748 | 1.00th=[ 717], 5.00th=[ 4113], 10.00th=[ 4555], 20.00th=[ 5735], 00:11:31.748 | 30.00th=[ 6652], 40.00th=[ 7177], 50.00th=[ 8160], 60.00th=[ 9241], 00:11:31.748 | 70.00th=[10552], 80.00th=[12387], 90.00th=[15401], 95.00th=[17433], 00:11:31.748 | 99.00th=[30278], 99.50th=[30278], 99.90th=[33424], 99.95th=[33424], 00:11:31.748 | 99.99th=[44303] 00:11:31.748 bw ( KiB/s): min=24624, max=28672, per=26.51%, avg=26648.00, stdev=2862.37, samples=2 00:11:31.748 iops : min= 6156, max= 7168, avg=6662.00, stdev=715.59, samples=2 00:11:31.748 lat (usec) : 500=0.06%, 750=0.55%, 1000=0.20% 00:11:31.748 lat (msec) : 2=0.13%, 4=2.54%, 10=65.23%, 20=28.09%, 50=3.19% 00:11:31.748 cpu : usr=4.89%, sys=8.08%, ctx=414, majf=0, minf=1 00:11:31.748 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:11:31.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:31.748 issued rwts: total=6656,6747,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.748 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:31.748 job1: (groupid=0, jobs=1): err= 0: pid=3742208: Wed Nov 27 09:43:46 2024 00:11:31.748 read: IOPS=3829, BW=15.0MiB/s (15.7MB/s)(15.0MiB/1002msec) 00:11:31.748 slat (nsec): min=926, max=19770k, avg=113171.99, stdev=882050.35 00:11:31.748 clat (usec): min=911, max=68654, avg=13768.42, stdev=13073.40 00:11:31.748 lat (usec): min=1816, max=68680, avg=13881.60, stdev=13155.89 00:11:31.748 clat percentiles (usec): 00:11:31.748 | 1.00th=[ 3359], 5.00th=[ 5014], 10.00th=[ 5538], 20.00th=[ 6587], 00:11:31.748 | 30.00th=[ 7767], 40.00th=[ 8356], 50.00th=[ 8848], 60.00th=[ 9110], 00:11:31.748 | 70.00th=[10945], 80.00th=[12387], 90.00th=[41157], 95.00th=[49021], 00:11:31.748 | 99.00th=[54264], 99.50th=[57934], 99.90th=[57934], 99.95th=[64750], 00:11:31.748 | 99.99th=[68682] 
00:11:31.748 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:11:31.748 slat (nsec): min=1621, max=15022k, avg=131509.65, stdev=843810.53 00:11:31.748 clat (usec): min=2245, max=81285, avg=18046.00, stdev=15810.11 00:11:31.748 lat (usec): min=2249, max=81296, avg=18177.51, stdev=15913.44 00:11:31.748 clat percentiles (usec): 00:11:31.748 | 1.00th=[ 3687], 5.00th=[ 5342], 10.00th=[ 6128], 20.00th=[ 7111], 00:11:31.748 | 30.00th=[ 7963], 40.00th=[ 8848], 50.00th=[11469], 60.00th=[16188], 00:11:31.748 | 70.00th=[21890], 80.00th=[26084], 90.00th=[35390], 95.00th=[61080], 00:11:31.748 | 99.00th=[74974], 99.50th=[79168], 99.90th=[81265], 99.95th=[81265], 00:11:31.748 | 99.99th=[81265] 00:11:31.748 bw ( KiB/s): min=16384, max=16384, per=16.30%, avg=16384.00, stdev= 0.00, samples=2 00:11:31.748 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:11:31.748 lat (usec) : 1000=0.01% 00:11:31.748 lat (msec) : 2=0.13%, 4=1.64%, 10=54.78%, 20=18.69%, 50=19.75% 00:11:31.748 lat (msec) : 100=4.99% 00:11:31.748 cpu : usr=2.20%, sys=4.10%, ctx=425, majf=0, minf=1 00:11:31.748 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:31.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:31.748 issued rwts: total=3837,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.748 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:31.748 job2: (groupid=0, jobs=1): err= 0: pid=3742212: Wed Nov 27 09:43:46 2024 00:11:31.748 read: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec) 00:11:31.748 slat (nsec): min=967, max=12639k, avg=71752.91, stdev=552645.45 00:11:31.748 clat (usec): min=1815, max=53469, avg=10556.67, stdev=5546.55 00:11:31.748 lat (usec): min=1857, max=61256, avg=10628.43, stdev=5574.68 00:11:31.748 clat percentiles (usec): 00:11:31.748 | 1.00th=[ 3195], 5.00th=[ 6194], 10.00th=[ 6915], 20.00th=[ 7570], 00:11:31.748 | 30.00th=[ 8356], 40.00th=[ 9241], 50.00th=[ 9765], 60.00th=[10290], 00:11:31.748 | 70.00th=[11076], 80.00th=[11863], 90.00th=[13829], 95.00th=[17695], 00:11:31.748 | 99.00th=[45876], 99.50th=[48497], 99.90th=[48497], 99.95th=[48497], 00:11:31.748 | 99.99th=[53216] 00:11:31.748 write: IOPS=6513, BW=25.4MiB/s (26.7MB/s)(25.6MiB/1005msec); 0 zone resets 00:11:31.748 slat (nsec): min=1642, max=27732k, avg=71269.36, stdev=571396.95 00:11:31.748 clat (usec): min=752, max=43487, avg=9538.11, stdev=5506.46 00:11:31.748 lat (usec): min=762, max=46501, avg=9609.38, stdev=5554.23 00:11:31.748 clat percentiles (usec): 00:11:31.748 | 1.00th=[ 1713], 5.00th=[ 3916], 10.00th=[ 4817], 20.00th=[ 6128], 00:11:31.748 | 30.00th=[ 7242], 40.00th=[ 7701], 50.00th=[ 8291], 60.00th=[ 9241], 00:11:31.748 | 70.00th=[10159], 80.00th=[10945], 90.00th=[14484], 95.00th=[19792], 00:11:31.748 | 99.00th=[35914], 99.50th=[40109], 99.90th=[42730], 99.95th=[43254], 00:11:31.748 | 99.99th=[43254] 00:11:31.748 bw ( KiB/s): min=25608, max=25736, per=25.54%, avg=25672.00, stdev=90.51, samples=2 00:11:31.748 iops : min= 6402, max= 6434, avg=6418.00, stdev=22.63, samples=2 00:11:31.748 lat (usec) : 1000=0.02% 00:11:31.748 lat (msec) : 2=0.69%, 4=3.10%, 10=57.19%, 20=35.40%, 50=3.58% 00:11:31.748 lat (msec) : 100=0.02% 00:11:31.748 cpu : usr=5.58%, sys=6.47%, ctx=447, majf=0, minf=1 00:11:31.748 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:31.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:11:31.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:31.748 issued rwts: total=6144,6546,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.748 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:31.748 job3: (groupid=0, jobs=1): err= 0: pid=3742214: Wed Nov 27 09:43:46 2024 00:11:31.748 read: IOPS=7641, BW=29.9MiB/s (31.3MB/s)(30.0MiB/1005msec) 00:11:31.748 slat (nsec): min=1015, max=8624.9k, avg=67329.55, stdev=487871.31 00:11:31.748 clat (usec): min=2447, max=18878, avg=8894.15, stdev=2182.72 00:11:31.748 lat (usec): min=2454, max=18909, avg=8961.48, stdev=2208.65 00:11:31.748 clat percentiles (usec): 00:11:31.748 | 1.00th=[ 4293], 5.00th=[ 6194], 10.00th=[ 6652], 20.00th=[ 7242], 00:11:31.748 | 30.00th=[ 7767], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8848], 00:11:31.748 | 70.00th=[ 9372], 80.00th=[10421], 90.00th=[11600], 95.00th=[13435], 00:11:31.748 | 99.00th=[15664], 99.50th=[16581], 99.90th=[17957], 99.95th=[18744], 00:11:31.748 | 99.99th=[19006] 00:11:31.748 write: IOPS=7829, BW=30.6MiB/s (32.1MB/s)(30.7MiB/1005msec); 0 zone resets 00:11:31.748 slat (nsec): min=1737, max=7450.5k, avg=55633.17, stdev=356465.39 00:11:31.748 clat (usec): min=1533, max=18635, avg=7494.38, stdev=1790.05 00:11:31.748 lat (usec): min=1541, max=18644, avg=7550.01, stdev=1809.77 00:11:31.748 clat percentiles (usec): 00:11:31.748 | 1.00th=[ 2769], 5.00th=[ 4293], 10.00th=[ 5014], 20.00th=[ 5997], 00:11:31.748 | 30.00th=[ 7177], 40.00th=[ 7504], 50.00th=[ 7767], 60.00th=[ 8029], 00:11:31.748 | 70.00th=[ 8225], 80.00th=[ 8586], 90.00th=[ 9241], 95.00th=[10290], 00:11:31.748 | 99.00th=[11994], 99.50th=[11994], 99.90th=[15401], 99.95th=[17171], 00:11:31.748 | 99.99th=[18744] 00:11:31.748 bw ( KiB/s): min=30928, max=31008, per=30.80%, avg=30968.00, stdev=56.57, samples=2 00:11:31.748 iops : min= 7732, max= 7752, avg=7742.00, stdev=14.14, samples=2 00:11:31.748 lat (msec) : 2=0.10%, 4=1.96%, 10=82.16%, 20=15.78% 00:11:31.748 cpu : usr=5.68%, sys=8.47%, ctx=693, majf=0, minf=1 00:11:31.748 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:31.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:31.748 issued rwts: total=7680,7869,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.748 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:31.748 00:11:31.748 Run status group 0 (all jobs): 00:11:31.748 READ: bw=94.5MiB/s (99.1MB/s), 15.0MiB/s-29.9MiB/s (15.7MB/s-31.3MB/s), io=95.0MiB (99.6MB), run=1002-1005msec 00:11:31.748 WRITE: bw=98.2MiB/s (103MB/s), 16.0MiB/s-30.6MiB/s (16.7MB/s-32.1MB/s), io=98.7MiB (103MB), run=1002-1005msec 00:11:31.748 00:11:31.748 Disk stats (read/write): 00:11:31.748 nvme0n1: ios=5685/5921, merge=0/0, ticks=43028/38339, in_queue=81367, util=83.27% 00:11:31.748 nvme0n2: ios=2755/3054, merge=0/0, ticks=18814/22025, in_queue=40839, util=86.84% 00:11:31.748 nvme0n3: ios=5653/5649, merge=0/0, ticks=47017/39238, in_queue=86255, util=94.83% 00:11:31.749 nvme0n4: ios=6201/6629, merge=0/0, ticks=52838/47972, in_queue=100810, util=94.87% 00:11:31.749 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:31.749 [global] 00:11:31.749 thread=1 00:11:31.749 invalidate=1 00:11:31.749 rw=randwrite 00:11:31.749 time_based=1 00:11:31.749 runtime=1 00:11:31.749 
ioengine=libaio 00:11:31.749 direct=1 00:11:31.749 bs=4096 00:11:31.749 iodepth=128 00:11:31.749 norandommap=0 00:11:31.749 numjobs=1 00:11:31.749 00:11:31.749 verify_dump=1 00:11:31.749 verify_backlog=512 00:11:31.749 verify_state_save=0 00:11:31.749 do_verify=1 00:11:31.749 verify=crc32c-intel 00:11:31.749 [job0] 00:11:31.749 filename=/dev/nvme0n1 00:11:31.749 [job1] 00:11:31.749 filename=/dev/nvme0n2 00:11:31.749 [job2] 00:11:31.749 filename=/dev/nvme0n3 00:11:31.749 [job3] 00:11:31.749 filename=/dev/nvme0n4 00:11:31.749 Could not set queue depth (nvme0n1) 00:11:31.749 Could not set queue depth (nvme0n2) 00:11:31.749 Could not set queue depth (nvme0n3) 00:11:31.749 Could not set queue depth (nvme0n4) 00:11:32.009 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:32.009 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:32.009 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:32.009 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:32.009 fio-3.35 00:11:32.009 Starting 4 threads 00:11:33.396 00:11:33.396 job0: (groupid=0, jobs=1): err= 0: pid=3742730: Wed Nov 27 09:43:48 2024 00:11:33.396 read: IOPS=7146, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec) 00:11:33.396 slat (nsec): min=944, max=7597.0k, avg=58480.90, stdev=403070.43 00:11:33.396 clat (usec): min=3308, max=31345, avg=8280.60, stdev=3939.57 00:11:33.396 lat (usec): min=3587, max=31347, avg=8339.08, stdev=3966.57 00:11:33.396 clat percentiles (usec): 00:11:33.396 | 1.00th=[ 4293], 5.00th=[ 5276], 10.00th=[ 5669], 20.00th=[ 6128], 00:11:33.396 | 30.00th=[ 6325], 40.00th=[ 6587], 50.00th=[ 7111], 60.00th=[ 7570], 00:11:33.396 | 70.00th=[ 8291], 80.00th=[ 8979], 90.00th=[11600], 95.00th=[17171], 00:11:33.396 | 99.00th=[26084], 99.50th=[28443], 99.90th=[28705], 99.95th=[28705], 00:11:33.396 | 99.99th=[31327] 00:11:33.396 write: IOPS=7630, BW=29.8MiB/s (31.3MB/s)(29.9MiB/1003msec); 0 zone resets 00:11:33.396 slat (nsec): min=1571, max=56061k, avg=64758.12, stdev=785523.18 00:11:33.396 clat (usec): min=1270, max=60919, avg=8867.79, stdev=7885.22 00:11:33.396 lat (usec): min=1283, max=60964, avg=8932.55, stdev=7923.79 00:11:33.396 clat percentiles (usec): 00:11:33.396 | 1.00th=[ 2442], 5.00th=[ 3916], 10.00th=[ 4686], 20.00th=[ 5538], 00:11:33.396 | 30.00th=[ 5800], 40.00th=[ 6063], 50.00th=[ 6718], 60.00th=[ 7308], 00:11:33.396 | 70.00th=[ 8094], 80.00th=[10028], 90.00th=[13960], 95.00th=[23725], 00:11:33.396 | 99.00th=[57410], 99.50th=[57410], 99.90th=[60031], 99.95th=[60031], 00:11:33.396 | 99.99th=[61080] 00:11:33.396 bw ( KiB/s): min=28360, max=31848, per=32.83%, avg=30104.00, stdev=2466.39, samples=2 00:11:33.396 iops : min= 7090, max= 7962, avg=7526.00, stdev=616.60, samples=2 00:11:33.396 lat (msec) : 2=0.26%, 4=2.94%, 10=79.21%, 20=13.38%, 50=3.35% 00:11:33.396 lat (msec) : 100=0.86% 00:11:33.396 cpu : usr=5.89%, sys=6.39%, ctx=650, majf=0, minf=2 00:11:33.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:33.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:33.396 issued rwts: total=7168,7653,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.396 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:33.396 job1: 
(groupid=0, jobs=1): err= 0: pid=3742732: Wed Nov 27 09:43:48 2024 00:11:33.396 read: IOPS=3229, BW=12.6MiB/s (13.2MB/s)(12.7MiB/1004msec) 00:11:33.396 slat (nsec): min=1017, max=12659k, avg=137027.70, stdev=859962.58 00:11:33.396 clat (usec): min=2330, max=72868, avg=16416.95, stdev=13479.72 00:11:33.396 lat (usec): min=2340, max=76690, avg=16553.98, stdev=13583.89 00:11:33.396 clat percentiles (usec): 00:11:33.396 | 1.00th=[ 3130], 5.00th=[ 5276], 10.00th=[ 5473], 20.00th=[ 6128], 00:11:33.396 | 30.00th=[ 7963], 40.00th=[ 9503], 50.00th=[10421], 60.00th=[13435], 00:11:33.396 | 70.00th=[21103], 80.00th=[23725], 90.00th=[34341], 95.00th=[44827], 00:11:33.396 | 99.00th=[67634], 99.50th=[67634], 99.90th=[72877], 99.95th=[72877], 00:11:33.396 | 99.99th=[72877] 00:11:33.396 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:11:33.396 slat (nsec): min=1659, max=14730k, avg=145221.85, stdev=801572.01 00:11:33.396 clat (usec): min=695, max=87668, avg=20507.96, stdev=16609.16 00:11:33.396 lat (usec): min=703, max=87676, avg=20653.18, stdev=16698.99 00:11:33.396 clat percentiles (usec): 00:11:33.396 | 1.00th=[ 1795], 5.00th=[ 4883], 10.00th=[ 5800], 20.00th=[ 7046], 00:11:33.396 | 30.00th=[11207], 40.00th=[15139], 50.00th=[16909], 60.00th=[18744], 00:11:33.396 | 70.00th=[21627], 80.00th=[27132], 90.00th=[40109], 95.00th=[62653], 00:11:33.396 | 99.00th=[80217], 99.50th=[87557], 99.90th=[87557], 99.95th=[87557], 00:11:33.396 | 99.99th=[87557] 00:11:33.396 bw ( KiB/s): min= 9264, max=19408, per=15.64%, avg=14336.00, stdev=7172.89, samples=2 00:11:33.396 iops : min= 2316, max= 4852, avg=3584.00, stdev=1793.22, samples=2 00:11:33.396 lat (usec) : 750=0.04% 00:11:33.396 lat (msec) : 2=0.53%, 4=2.05%, 10=34.76%, 20=29.30%, 50=27.82% 00:11:33.396 lat (msec) : 100=5.49% 00:11:33.396 cpu : usr=3.29%, sys=2.69%, ctx=413, majf=0, minf=1 00:11:33.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:33.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:33.397 issued rwts: total=3242,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.397 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:33.397 job2: (groupid=0, jobs=1): err= 0: pid=3742733: Wed Nov 27 09:43:48 2024 00:11:33.397 read: IOPS=5236, BW=20.5MiB/s (21.4MB/s)(20.5MiB/1002msec) 00:11:33.397 slat (nsec): min=997, max=25023k, avg=91447.32, stdev=774624.25 00:11:33.397 clat (usec): min=1209, max=45822, avg=13065.56, stdev=7815.80 00:11:33.397 lat (usec): min=2588, max=45829, avg=13157.00, stdev=7887.29 00:11:33.397 clat percentiles (usec): 00:11:33.397 | 1.00th=[ 4113], 5.00th=[ 5604], 10.00th=[ 6390], 20.00th=[ 7242], 00:11:33.397 | 30.00th=[ 7963], 40.00th=[ 8586], 50.00th=[ 9634], 60.00th=[11207], 00:11:33.397 | 70.00th=[13566], 80.00th=[21103], 90.00th=[26084], 95.00th=[27919], 00:11:33.397 | 99.00th=[36439], 99.50th=[37487], 99.90th=[45876], 99.95th=[45876], 00:11:33.397 | 99.99th=[45876] 00:11:33.397 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:11:33.397 slat (nsec): min=1653, max=14511k, avg=67744.28, stdev=647629.68 00:11:33.397 clat (usec): min=374, max=39345, avg=10377.17, stdev=7466.01 00:11:33.397 lat (usec): min=402, max=39354, avg=10444.92, stdev=7525.50 00:11:33.397 clat percentiles (usec): 00:11:33.397 | 1.00th=[ 1012], 5.00th=[ 2409], 10.00th=[ 3130], 20.00th=[ 4178], 00:11:33.397 | 30.00th=[ 5669], 40.00th=[ 7111], 
50.00th=[ 7570], 60.00th=[ 8848], 00:11:33.397 | 70.00th=[11863], 80.00th=[16057], 90.00th=[22152], 95.00th=[25822], 00:11:33.397 | 99.00th=[33424], 99.50th=[35914], 99.90th=[39584], 99.95th=[39584], 00:11:33.397 | 99.99th=[39584] 00:11:33.397 bw ( KiB/s): min=20480, max=24576, per=24.57%, avg=22528.00, stdev=2896.31, samples=2 00:11:33.397 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:11:33.397 lat (usec) : 500=0.05%, 750=0.13%, 1000=0.34% 00:11:33.397 lat (msec) : 2=1.54%, 4=7.94%, 10=48.23%, 20=22.68%, 50=19.09% 00:11:33.397 cpu : usr=4.30%, sys=7.39%, ctx=358, majf=0, minf=1 00:11:33.397 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:33.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:33.397 issued rwts: total=5247,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.397 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:33.397 job3: (groupid=0, jobs=1): err= 0: pid=3742734: Wed Nov 27 09:43:48 2024 00:11:33.397 read: IOPS=5673, BW=22.2MiB/s (23.2MB/s)(22.2MiB/1004msec) 00:11:33.397 slat (nsec): min=945, max=9417.6k, avg=82272.15, stdev=566294.48 00:11:33.397 clat (usec): min=916, max=35496, avg=10986.67, stdev=4703.33 00:11:33.397 lat (usec): min=1082, max=35498, avg=11068.94, stdev=4735.92 00:11:33.397 clat percentiles (usec): 00:11:33.397 | 1.00th=[ 1631], 5.00th=[ 3261], 10.00th=[ 5604], 20.00th=[ 8160], 00:11:33.397 | 30.00th=[ 8848], 40.00th=[ 9372], 50.00th=[10159], 60.00th=[10814], 00:11:33.397 | 70.00th=[11994], 80.00th=[14353], 90.00th=[17957], 95.00th=[20579], 00:11:33.397 | 99.00th=[24249], 99.50th=[26084], 99.90th=[27919], 99.95th=[35390], 00:11:33.397 | 99.99th=[35390] 00:11:33.397 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:11:33.397 slat (nsec): min=1636, max=11112k, avg=74305.35, stdev=486916.86 00:11:33.397 clat (usec): min=617, max=32272, avg=10469.45, stdev=5649.49 00:11:33.397 lat (usec): min=710, max=32305, avg=10543.76, stdev=5684.76 00:11:33.397 clat percentiles (usec): 00:11:33.397 | 1.00th=[ 1713], 5.00th=[ 4146], 10.00th=[ 5145], 20.00th=[ 6390], 00:11:33.397 | 30.00th=[ 6980], 40.00th=[ 8029], 50.00th=[ 8979], 60.00th=[ 9503], 00:11:33.397 | 70.00th=[11207], 80.00th=[15139], 90.00th=[19268], 95.00th=[21365], 00:11:33.397 | 99.00th=[28967], 99.50th=[29492], 99.90th=[32113], 99.95th=[32375], 00:11:33.397 | 99.99th=[32375] 00:11:33.397 bw ( KiB/s): min=19976, max=28664, per=26.53%, avg=24320.00, stdev=6143.34, samples=2 00:11:33.397 iops : min= 4994, max= 7166, avg=6080.00, stdev=1535.84, samples=2 00:11:33.397 lat (usec) : 750=0.06%, 1000=0.04% 00:11:33.397 lat (msec) : 2=1.37%, 4=3.64%, 10=51.50%, 20=37.32%, 50=6.06% 00:11:33.397 cpu : usr=3.99%, sys=7.18%, ctx=457, majf=0, minf=1 00:11:33.397 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:33.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:33.397 issued rwts: total=5696,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.397 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:33.397 00:11:33.397 Run status group 0 (all jobs): 00:11:33.397 READ: bw=83.1MiB/s (87.1MB/s), 12.6MiB/s-27.9MiB/s (13.2MB/s-29.3MB/s), io=83.4MiB (87.5MB), run=1002-1004msec 00:11:33.397 WRITE: bw=89.5MiB/s (93.9MB/s), 13.9MiB/s-29.8MiB/s (14.6MB/s-31.3MB/s), 
io=89.9MiB (94.3MB), run=1002-1004msec 00:11:33.397 00:11:33.397 Disk stats (read/write): 00:11:33.397 nvme0n1: ios=6708/6751, merge=0/0, ticks=35271/33533, in_queue=68804, util=84.57% 00:11:33.397 nvme0n2: ios=2649/3072, merge=0/0, ticks=23190/40059, in_queue=63249, util=89.09% 00:11:33.397 nvme0n3: ios=3770/4096, merge=0/0, ticks=36912/32983, in_queue=69895, util=92.51% 00:11:33.397 nvme0n4: ios=5178/5463, merge=0/0, ticks=46928/44708, in_queue=91636, util=94.02% 00:11:33.397 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:33.397 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3743066 00:11:33.397 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:33.397 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:33.397 [global] 00:11:33.397 thread=1 00:11:33.397 invalidate=1 00:11:33.397 rw=read 00:11:33.397 time_based=1 00:11:33.397 runtime=10 00:11:33.397 ioengine=libaio 00:11:33.397 direct=1 00:11:33.397 bs=4096 00:11:33.397 iodepth=1 00:11:33.397 norandommap=1 00:11:33.397 numjobs=1 00:11:33.397 00:11:33.397 [job0] 00:11:33.397 filename=/dev/nvme0n1 00:11:33.397 [job1] 00:11:33.397 filename=/dev/nvme0n2 00:11:33.397 [job2] 00:11:33.397 filename=/dev/nvme0n3 00:11:33.397 [job3] 00:11:33.397 filename=/dev/nvme0n4 00:11:33.397 Could not set queue depth (nvme0n1) 00:11:33.397 Could not set queue depth (nvme0n2) 00:11:33.397 Could not set queue depth (nvme0n3) 00:11:33.397 Could not set queue depth (nvme0n4) 00:11:33.658 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:33.658 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:33.658 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:33.658 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:33.658 fio-3.35 00:11:33.658 Starting 4 threads 00:11:36.234 09:43:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:36.494 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=10993664, buflen=4096 00:11:36.494 fio: pid=3743263, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:36.494 09:43:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:36.756 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=2502656, buflen=4096 00:11:36.756 fio: pid=3743262, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:36.756 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:36.756 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:36.756 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:36.756 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:36.756 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=1003520, buflen=4096 00:11:36.756 fio: pid=3743256, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:37.016 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:37.016 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:37.016 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=1974272, buflen=4096 00:11:37.016 fio: pid=3743257, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:37.017 00:11:37.017 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3743256: Wed Nov 27 09:43:52 2024 00:11:37.017 read: IOPS=82, BW=330KiB/s (338kB/s)(980KiB/2970msec) 00:11:37.017 slat (usec): min=7, max=713, avg=28.79, stdev=45.63 00:11:37.017 clat (usec): min=444, max=42143, avg=11994.53, stdev=18013.50 00:11:37.017 lat (usec): min=470, max=42168, avg=12023.33, stdev=18019.41 00:11:37.017 clat percentiles (usec): 00:11:37.017 | 1.00th=[ 619], 5.00th=[ 873], 10.00th=[ 971], 20.00th=[ 1037], 00:11:37.017 | 30.00th=[ 1074], 40.00th=[ 1123], 50.00th=[ 1139], 60.00th=[ 1188], 00:11:37.017 | 70.00th=[ 1237], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:11:37.017 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:37.017 | 99.99th=[42206] 00:11:37.017 bw ( KiB/s): min= 96, max= 1000, per=6.59%, avg=336.00, stdev=388.52, samples=5 00:11:37.017 iops : min= 24, max= 250, avg=84.00, stdev=97.13, samples=5 00:11:37.017 lat (usec) : 500=0.41%, 750=1.63%, 1000=9.35% 00:11:37.017 lat (msec) : 2=61.38%, 50=26.83% 00:11:37.017 cpu : usr=0.07%, sys=0.24%, ctx=248, majf=0, minf=1 00:11:37.017 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:37.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:37.017 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:37.017 issued rwts: total=246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:37.017 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:37.017 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3743257: Wed Nov 27 09:43:52 2024 00:11:37.017 read: IOPS=153, BW=611KiB/s (626kB/s)(1928KiB/3153msec) 00:11:37.017 slat (usec): min=7, max=28693, avg=132.82, stdev=1504.63 00:11:37.017 clat (usec): min=408, max=41592, avg=6357.32, stdev=13952.67 00:11:37.017 lat (usec): min=434, max=41618, avg=6490.36, stdev=13992.05 00:11:37.017 clat percentiles (usec): 00:11:37.017 | 1.00th=[ 461], 5.00th=[ 562], 10.00th=[ 644], 20.00th=[ 693], 00:11:37.017 | 30.00th=[ 750], 40.00th=[ 766], 50.00th=[ 791], 60.00th=[ 807], 00:11:37.017 | 70.00th=[ 824], 80.00th=[ 889], 90.00th=[41157], 95.00th=[41157], 00:11:37.017 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:11:37.017 | 99.99th=[41681] 00:11:37.017 bw ( KiB/s): min= 144, max= 1880, per=11.90%, avg=607.67, stdev=727.38, samples=6 00:11:37.017 iops : min= 36, max= 470, avg=151.83, stdev=181.78, samples=6 00:11:37.017 lat (usec) : 500=1.66%, 750=29.19%, 1000=54.45% 00:11:37.017 lat (msec) : 2=0.62%, 
50=13.87% 00:11:37.017 cpu : usr=0.06%, sys=0.51%, ctx=489, majf=0, minf=2 00:11:37.017 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:37.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:37.017 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:37.017 issued rwts: total=483,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:37.017 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:37.017 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3743262: Wed Nov 27 09:43:52 2024 00:11:37.017 read: IOPS=220, BW=880KiB/s (901kB/s)(2444KiB/2778msec) 00:11:37.017 slat (nsec): min=6268, max=61314, avg=25773.91, stdev=3459.15 00:11:37.017 clat (usec): min=614, max=45109, avg=4477.68, stdev=11154.27 00:11:37.017 lat (usec): min=623, max=45135, avg=4503.46, stdev=11154.24 00:11:37.017 clat percentiles (usec): 00:11:37.017 | 1.00th=[ 709], 5.00th=[ 1012], 10.00th=[ 1045], 20.00th=[ 1074], 00:11:37.017 | 30.00th=[ 1090], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1156], 00:11:37.017 | 70.00th=[ 1172], 80.00th=[ 1188], 90.00th=[ 1254], 95.00th=[41157], 00:11:37.017 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:11:37.017 | 99.99th=[45351] 00:11:37.017 bw ( KiB/s): min= 472, max= 1040, per=16.70%, avg=852.80, stdev=230.66, samples=5 00:11:37.017 iops : min= 118, max= 260, avg=213.20, stdev=57.66, samples=5 00:11:37.017 lat (usec) : 750=1.31%, 1000=3.27% 00:11:37.017 lat (msec) : 2=86.93%, 50=8.33% 00:11:37.017 cpu : usr=0.25%, sys=0.61%, ctx=613, majf=0, minf=2 00:11:37.017 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:37.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:37.017 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:37.017 issued rwts: total=612,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:37.017 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:37.017 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3743263: Wed Nov 27 09:43:52 2024 00:11:37.017 read: IOPS=1032, BW=4128KiB/s (4227kB/s)(10.5MiB/2601msec) 00:11:37.017 slat (nsec): min=3487, max=62476, avg=23184.03, stdev=8369.66 00:11:37.017 clat (usec): min=215, max=42043, avg=932.26, stdev=2591.31 00:11:37.017 lat (usec): min=242, max=42069, avg=955.44, stdev=2591.63 00:11:37.017 clat percentiles (usec): 00:11:37.017 | 1.00th=[ 494], 5.00th=[ 611], 10.00th=[ 660], 20.00th=[ 693], 00:11:37.017 | 30.00th=[ 742], 40.00th=[ 766], 50.00th=[ 775], 60.00th=[ 791], 00:11:37.017 | 70.00th=[ 799], 80.00th=[ 816], 90.00th=[ 840], 95.00th=[ 881], 00:11:37.017 | 99.00th=[ 1205], 99.50th=[ 1336], 99.90th=[41681], 99.95th=[42206], 00:11:37.017 | 99.99th=[42206] 00:11:37.017 bw ( KiB/s): min= 2376, max= 5248, per=83.94%, avg=4283.20, stdev=1242.28, samples=5 00:11:37.017 iops : min= 594, max= 1312, avg=1070.80, stdev=310.57, samples=5 00:11:37.017 lat (usec) : 250=0.07%, 500=1.12%, 750=30.95%, 1000=64.28% 00:11:37.017 lat (msec) : 2=3.13%, 50=0.41% 00:11:37.017 cpu : usr=1.08%, sys=2.69%, ctx=2685, majf=0, minf=2 00:11:37.017 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:37.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:37.017 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:37.017 issued rwts: total=2685,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:11:37.017 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:37.017 00:11:37.017 Run status group 0 (all jobs): 00:11:37.017 READ: bw=5102KiB/s (5225kB/s), 330KiB/s-4128KiB/s (338kB/s-4227kB/s), io=15.7MiB (16.5MB), run=2601-3153msec 00:11:37.017 00:11:37.017 Disk stats (read/write): 00:11:37.017 nvme0n1: ios=242/0, merge=0/0, ticks=2813/0, in_queue=2813, util=94.76% 00:11:37.017 nvme0n2: ios=479/0, merge=0/0, ticks=3011/0, in_queue=3011, util=94.11% 00:11:37.017 nvme0n3: ios=552/0, merge=0/0, ticks=2539/0, in_queue=2539, util=95.99% 00:11:37.017 nvme0n4: ios=2684/0, merge=0/0, ticks=2459/0, in_queue=2459, util=96.42% 00:11:37.278 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:37.278 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:37.539 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:37.539 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:37.539 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:37.539 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:37.800 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:37.800 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:38.061 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:38.061 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3743066 00:11:38.061 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:38.061 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:38.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.061 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:38.061 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:38.061 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:38.061 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:38.061 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:38.061 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:38.061 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:38.061 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:38.061 09:43:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:38.061 nvmf hotplug test: fio failed as expected 00:11:38.061 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:38.322 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:38.322 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:38.322 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:38.322 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:38.322 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:38.322 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:38.322 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:38.322 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:38.322 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:38.322 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:38.322 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:38.322 rmmod nvme_tcp 00:11:38.322 rmmod nvme_fabrics 00:11:38.322 rmmod nvme_keyring 00:11:38.322 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:38.322 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:38.322 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:38.322 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3739447 ']' 00:11:38.322 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3739447 00:11:38.322 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3739447 ']' 00:11:38.322 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3739447 00:11:38.322 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:38.322 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:38.322 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3739447 00:11:38.322 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:38.322 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:38.322 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3739447' 00:11:38.322 killing process with pid 3739447 00:11:38.322 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3739447 00:11:38.322 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3739447 00:11:38.582 09:43:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:38.582 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:38.582 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:38.582 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:38.582 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:38.582 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:38.582 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:38.582 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:38.582 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:38.582 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.582 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:38.582 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.495 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:40.495 00:11:40.495 real 0m29.496s 00:11:40.495 user 2m32.328s 00:11:40.495 sys 0m9.488s 00:11:40.755 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:40.755 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.755 ************************************ 00:11:40.755 END TEST nvmf_fio_target 00:11:40.755 ************************************ 00:11:40.755 09:43:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:40.755 09:43:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:40.755 09:43:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:40.755 09:43:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:40.755 ************************************ 00:11:40.755 START TEST nvmf_bdevio 00:11:40.755 ************************************ 00:11:40.755 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:40.755 * Looking for test storage... 
00:11:40.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.755 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:40.755 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:11:40.755 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:40.755 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:40.755 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:40.755 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:40.755 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:40.755 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:40.755 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:40.755 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:40.755 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:40.755 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:40.755 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:40.755 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:40.755 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:40.755 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:40.755 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:40.755 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:40.755 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:41.018 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:41.018 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:41.018 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:41.018 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:41.018 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:41.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.019 --rc genhtml_branch_coverage=1 00:11:41.019 --rc genhtml_function_coverage=1 00:11:41.019 --rc genhtml_legend=1 00:11:41.019 --rc geninfo_all_blocks=1 00:11:41.019 --rc geninfo_unexecuted_blocks=1 00:11:41.019 00:11:41.019 ' 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:41.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.019 --rc genhtml_branch_coverage=1 00:11:41.019 --rc genhtml_function_coverage=1 00:11:41.019 --rc genhtml_legend=1 00:11:41.019 --rc geninfo_all_blocks=1 00:11:41.019 --rc geninfo_unexecuted_blocks=1 00:11:41.019 00:11:41.019 ' 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:41.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.019 --rc genhtml_branch_coverage=1 00:11:41.019 --rc genhtml_function_coverage=1 00:11:41.019 --rc genhtml_legend=1 00:11:41.019 --rc geninfo_all_blocks=1 00:11:41.019 --rc geninfo_unexecuted_blocks=1 00:11:41.019 00:11:41.019 ' 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:41.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.019 --rc genhtml_branch_coverage=1 00:11:41.019 --rc genhtml_function_coverage=1 00:11:41.019 --rc genhtml_legend=1 00:11:41.019 --rc geninfo_all_blocks=1 00:11:41.019 --rc geninfo_unexecuted_blocks=1 00:11:41.019 00:11:41.019 ' 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:41.019 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:41.019 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:49.323 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:49.323 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:49.323 09:44:03 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:49.323 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:49.323 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:49.323 
09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:49.323 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:49.323 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.587 ms 00:11:49.323 00:11:49.323 --- 10.0.0.2 ping statistics --- 00:11:49.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.323 rtt min/avg/max/mdev = 0.587/0.587/0.587/0.000 ms 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:49.323 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:49.323 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:11:49.323 00:11:49.323 --- 10.0.0.1 ping statistics --- 00:11:49.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.323 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3748517 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3748517 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3748517 ']' 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:49.323 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:49.323 [2024-11-27 09:44:03.816076] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
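The trace above is nvmftestinit building the two-namespace NVMe/TCP topology: the first E810 port (cvl_0_0) is moved into a private namespace, cvl_0_0_ns_spdk, to act as the target side, while cvl_0_1 stays in the root namespace as the initiator; a comment-tagged iptables rule opens port 4420, and connectivity is proven with a ping in each direction before anything else runs. A minimal standalone sketch of the same setup, assuming the interface names and 10.0.0.0/24 addresses from this run:

# Target port lives in its own namespace; the initiator stays in the root one.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port, comment-tagged so teardown can strip exactly this rule.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF
# Both directions must answer before the test proceeds.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1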
00:11:49.323 [2024-11-27 09:44:03.816147] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.323 [2024-11-27 09:44:03.914783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:49.324 [2024-11-27 09:44:03.967309] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:49.324 [2024-11-27 09:44:03.967361] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:49.324 [2024-11-27 09:44:03.967370] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:49.324 [2024-11-27 09:44:03.967378] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:49.324 [2024-11-27 09:44:03.967384] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:49.324 [2024-11-27 09:44:03.969442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:49.324 [2024-11-27 09:44:03.969583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:49.324 [2024-11-27 09:44:03.969744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:49.324 [2024-11-27 09:44:03.969745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:49.324 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:49.324 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:49.324 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:49.324 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:49.324 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:49.324 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:49.324 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:49.324 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.324 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:49.324 [2024-11-27 09:44:04.697038] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:49.324 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.324 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:49.324 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.324 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:49.324 Malloc0 00:11:49.324 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.324 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:49.324 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.324 09:44:04 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:49.324 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.324 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:49.324 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.324 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:49.324 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.324 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.324 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.324 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:49.324 [2024-11-27 09:44:04.773051] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.324 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.324 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:49.324 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:49.324 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:49.324 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:49.324 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:49.324 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:49.324 { 00:11:49.324 "params": { 00:11:49.324 "name": "Nvme$subsystem", 00:11:49.324 "trtype": "$TEST_TRANSPORT", 00:11:49.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:49.324 "adrfam": "ipv4", 00:11:49.324 "trsvcid": "$NVMF_PORT", 00:11:49.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:49.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:49.324 "hdgst": ${hdgst:-false}, 00:11:49.324 "ddgst": ${ddgst:-false} 00:11:49.324 }, 00:11:49.324 "method": "bdev_nvme_attach_controller" 00:11:49.324 } 00:11:49.324 EOF 00:11:49.324 )") 00:11:49.324 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:49.584 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:11:49.584 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:49.584 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:49.584 "params": { 00:11:49.584 "name": "Nvme1", 00:11:49.584 "trtype": "tcp", 00:11:49.584 "traddr": "10.0.0.2", 00:11:49.584 "adrfam": "ipv4", 00:11:49.584 "trsvcid": "4420", 00:11:49.584 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:49.584 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:49.584 "hdgst": false, 00:11:49.584 "ddgst": false 00:11:49.584 }, 00:11:49.584 "method": "bdev_nvme_attach_controller" 00:11:49.584 }' 00:11:49.584 [2024-11-27 09:44:04.832651] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
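Condensing what bdevio.sh just did: after nvmfappstart launched nvmf_tgt inside the target namespace, the rpc_cmd calls traced above configured a malloc-backed subsystem over the RPC Unix socket, then the bdevio binary was handed a generated attach-controller config on /dev/fd/62. The same target bring-up, issued directly with SPDK's scripts/rpc.py and the exact flags from the trace (the Unix socket lives on the shared filesystem, so rpc.py works from the root namespace even though the target runs inside cvl_0_0_ns_spdk):

# Transport flags as traced: '-o' from NVMF_TRANSPORT_OPTS, '-u 8192' from bdevio.sh.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# 64 MiB RAM disk with 512 B blocks (matches the Nvme1n1 geometry reported below).
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# -a: allow any host NQN, -s: serial number.
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420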
00:11:49.584 [2024-11-27 09:44:04.832719] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3748663 ] 00:11:49.584 [2024-11-27 09:44:04.927360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:49.584 [2024-11-27 09:44:04.984203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.584 [2024-11-27 09:44:04.984304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:49.584 [2024-11-27 09:44:04.984305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.845 I/O targets: 00:11:49.845 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:49.845 00:11:49.845 00:11:49.845 CUnit - A unit testing framework for C - Version 2.1-3 00:11:49.845 http://cunit.sourceforge.net/ 00:11:49.845 00:11:49.845 00:11:49.845 Suite: bdevio tests on: Nvme1n1 00:11:49.845 Test: blockdev write read block ...passed 00:11:49.845 Test: blockdev write zeroes read block ...passed 00:11:49.845 Test: blockdev write zeroes read no split ...passed 00:11:49.845 Test: blockdev write zeroes read split ...passed 00:11:49.845 Test: blockdev write zeroes read split partial ...passed 00:11:49.845 Test: blockdev reset ...[2024-11-27 09:44:05.288210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:49.845 [2024-11-27 09:44:05.288301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25970 (9): Bad file descriptor 00:11:49.845 [2024-11-27 09:44:05.303096] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
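The bdevio app attaches to that listener as bdev Nvme1n1 and walks the CUnit suite above, including a mid-suite controller reset (the reset intentionally drops the TCP qpair, hence the Failed to flush / Bad file descriptor notice just before the reconnect succeeds). To replay the run outside the harness, the attach-controller fragment printed earlier can be wrapped in a bdev-subsystem JSON config and passed as a file instead of /dev/fd/62; the outer wrapper below is assumed, since only the fragment appears in this trace:

# Hypothetical standalone replay of the bdevio step, wrapper assumed.
cat > /tmp/bdevio.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
test/bdev/bdevio/bdevio --json /tmp/bdevio.json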
00:11:49.845 passed 00:11:49.845 Test: blockdev write read 8 blocks ...passed 00:11:49.845 Test: blockdev write read size > 128k ...passed 00:11:49.845 Test: blockdev write read invalid size ...passed 00:11:50.107 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:50.107 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:50.107 Test: blockdev write read max offset ...passed 00:11:50.107 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:50.107 Test: blockdev writev readv 8 blocks ...passed 00:11:50.107 Test: blockdev writev readv 30 x 1block ...passed 00:11:50.107 Test: blockdev writev readv block ...passed 00:11:50.107 Test: blockdev writev readv size > 128k ...passed 00:11:50.107 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:50.107 Test: blockdev comparev and writev ...[2024-11-27 09:44:05.486362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:50.107 [2024-11-27 09:44:05.486409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:50.107 [2024-11-27 09:44:05.486426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:50.107 [2024-11-27 09:44:05.486435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:50.107 [2024-11-27 09:44:05.486988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:50.107 [2024-11-27 09:44:05.487002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:50.107 [2024-11-27 09:44:05.487016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:50.107 [2024-11-27 09:44:05.487024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:50.107 [2024-11-27 09:44:05.487541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:50.107 [2024-11-27 09:44:05.487560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:50.107 [2024-11-27 09:44:05.487574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:50.107 [2024-11-27 09:44:05.487583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:50.107 [2024-11-27 09:44:05.488105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:50.107 [2024-11-27 09:44:05.488116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:50.107 [2024-11-27 09:44:05.488130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:50.107 [2024-11-27 09:44:05.488138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:50.107 passed 00:11:50.107 Test: blockdev nvme passthru rw ...passed 00:11:50.107 Test: blockdev nvme passthru vendor specific ...[2024-11-27 09:44:05.573017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:50.107 [2024-11-27 09:44:05.573033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:50.107 [2024-11-27 09:44:05.573404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:50.107 [2024-11-27 09:44:05.573415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:50.368 [2024-11-27 09:44:05.573839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:50.368 [2024-11-27 09:44:05.573854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:50.369 [2024-11-27 09:44:05.574238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:50.369 [2024-11-27 09:44:05.574250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:50.369 passed 00:11:50.369 Test: blockdev nvme admin passthru ...passed 00:11:50.369 Test: blockdev copy ...passed 00:11:50.369 00:11:50.369 Run Summary: Type Total Ran Passed Failed Inactive 00:11:50.369 suites 1 1 n/a 0 0 00:11:50.369 tests 23 23 23 0 0 00:11:50.369 asserts 152 152 152 0 n/a 00:11:50.369 00:11:50.369 Elapsed time = 0.962 seconds 00:11:50.369 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:50.369 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.369 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:50.369 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.369 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:50.369 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:50.369 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:50.369 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:50.369 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:50.369 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:50.369 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:50.369 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:50.369 rmmod nvme_tcp 00:11:50.369 rmmod nvme_fabrics 00:11:50.369 rmmod nvme_keyring 00:11:50.630 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:50.630 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:50.630 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:11:50.630 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3748517 ']' 00:11:50.630 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3748517 00:11:50.630 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3748517 ']' 00:11:50.630 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3748517 00:11:50.630 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:50.630 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:50.630 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3748517 00:11:50.630 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:50.630 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:50.630 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3748517' 00:11:50.630 killing process with pid 3748517 00:11:50.630 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3748517 00:11:50.630 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3748517 00:11:50.630 09:44:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:50.630 09:44:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:50.630 09:44:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:50.630 09:44:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:50.630 09:44:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:50.630 09:44:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:50.630 09:44:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:50.630 09:44:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:50.630 09:44:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:50.630 09:44:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.892 09:44:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:50.892 09:44:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.807 09:44:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:52.808 00:11:52.808 real 0m12.133s 00:11:52.808 user 0m12.438s 00:11:52.808 sys 0m6.310s 00:11:52.808 09:44:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:52.808 09:44:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:52.808 ************************************ 00:11:52.808 END TEST nvmf_bdevio 00:11:52.808 ************************************ 00:11:52.808 09:44:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:52.808 00:11:52.808 real 5m5.042s 00:11:52.808 user 11m45.552s 00:11:52.808 sys 1m52.243s 
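Teardown in the trace is the mirror image of setup, and the order matters: the subsystem is deleted first, the kernel initiator modules are unloaded next, then the target process is killed, and only then are the firewall rule and the namespace removed. A condensed sketch of that nvmftestfini sequence, with $nvmfpid standing in for this run's 3748517 (the netns delete happens inside the xtrace-disabled _remove_spdk_ns helper, so it is inferred rather than shown above):

scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
sync
modprobe -v -r nvme-tcp        # drags out nvme_tcp and nvme_keyring, as logged
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                # $nvmfpid: the nvmf_tgt PID (3748517 in this run)
# Strip only the comment-tagged SPDK rules; unrelated firewall state survives.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk    # cvl_0_0 returns to the root namespace
ip -4 addr flush cvl_0_1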
00:11:52.808 09:44:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:52.808 09:44:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:52.808 ************************************ 00:11:52.808 END TEST nvmf_target_core 00:11:52.808 ************************************ 00:11:52.808 09:44:08 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:52.808 09:44:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:52.808 09:44:08 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:52.808 09:44:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:53.069 ************************************ 00:11:53.069 START TEST nvmf_target_extra 00:11:53.069 ************************************ 00:11:53.069 09:44:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:53.069 * Looking for test storage... 00:11:53.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:53.069 09:44:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:53.069 09:44:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:11:53.069 09:44:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:53.069 09:44:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:53.069 09:44:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:53.069 09:44:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:53.069 09:44:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:53.069 09:44:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:53.069 09:44:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:53.069 09:44:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:53.069 09:44:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:53.069 09:44:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:53.069 09:44:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:53.069 09:44:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:53.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.070 --rc genhtml_branch_coverage=1 00:11:53.070 --rc genhtml_function_coverage=1 00:11:53.070 --rc genhtml_legend=1 00:11:53.070 --rc geninfo_all_blocks=1 00:11:53.070 --rc geninfo_unexecuted_blocks=1 00:11:53.070 00:11:53.070 ' 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:53.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.070 --rc genhtml_branch_coverage=1 00:11:53.070 --rc genhtml_function_coverage=1 00:11:53.070 --rc genhtml_legend=1 00:11:53.070 --rc geninfo_all_blocks=1 00:11:53.070 --rc geninfo_unexecuted_blocks=1 00:11:53.070 00:11:53.070 ' 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:53.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.070 --rc genhtml_branch_coverage=1 00:11:53.070 --rc genhtml_function_coverage=1 00:11:53.070 --rc genhtml_legend=1 00:11:53.070 --rc geninfo_all_blocks=1 00:11:53.070 --rc geninfo_unexecuted_blocks=1 00:11:53.070 00:11:53.070 ' 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:53.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.070 --rc genhtml_branch_coverage=1 00:11:53.070 --rc genhtml_function_coverage=1 00:11:53.070 --rc genhtml_legend=1 00:11:53.070 --rc geninfo_all_blocks=1 00:11:53.070 --rc geninfo_unexecuted_blocks=1 00:11:53.070 00:11:53.070 ' 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
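The environment block being sourced here fixes the well-known defaults the extra-target suites rely on: listener ports 4420/4421/4422, serial SPDKISFASTANDAWESOME, and a per-run host identity minted with nvme gen-hostnqn (the uuid below is this run's value; every run generates its own). A sketch of how a kernel initiator would present that identity to the first listener, assuming nvme-cli from the test image and the 10.0.0.2 target address used earlier; no such connect actually happens at this point in the trace:

HOSTNQN=$(nvme gen-hostnqn)
# e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
HOSTID=${HOSTNQN##*uuid:}          # NVME_HOSTID is the uuid portion, as logged
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
    --hostnqn="$HOSTNQN" --hostid="$HOSTID"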
00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:53.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:53.070 09:44:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:53.331 09:44:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:53.331 09:44:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:53.331 09:44:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:53.331 09:44:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:53.331 09:44:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:53.331 09:44:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.331 09:44:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:53.331 ************************************ 00:11:53.331 START TEST nvmf_example 00:11:53.331 ************************************ 00:11:53.331 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:53.331 * Looking for test storage... 
00:11:53.331 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:53.331 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:53.331 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:11:53.331 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:53.331 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:53.331 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:53.331 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:53.331 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:53.331 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:53.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.332 --rc genhtml_branch_coverage=1 00:11:53.332 --rc genhtml_function_coverage=1 00:11:53.332 --rc genhtml_legend=1 00:11:53.332 --rc geninfo_all_blocks=1 00:11:53.332 --rc geninfo_unexecuted_blocks=1 00:11:53.332 00:11:53.332 ' 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:53.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.332 --rc genhtml_branch_coverage=1 00:11:53.332 --rc genhtml_function_coverage=1 00:11:53.332 --rc genhtml_legend=1 00:11:53.332 --rc geninfo_all_blocks=1 00:11:53.332 --rc geninfo_unexecuted_blocks=1 00:11:53.332 00:11:53.332 ' 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:53.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.332 --rc genhtml_branch_coverage=1 00:11:53.332 --rc genhtml_function_coverage=1 00:11:53.332 --rc genhtml_legend=1 00:11:53.332 --rc geninfo_all_blocks=1 00:11:53.332 --rc geninfo_unexecuted_blocks=1 00:11:53.332 00:11:53.332 ' 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:53.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.332 --rc genhtml_branch_coverage=1 00:11:53.332 --rc genhtml_function_coverage=1 00:11:53.332 --rc genhtml_legend=1 00:11:53.332 --rc geninfo_all_blocks=1 00:11:53.332 --rc geninfo_unexecuted_blocks=1 00:11:53.332 00:11:53.332 ' 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:53.332 09:44:08 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:53.332 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:53.593 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:53.593 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:53.593 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:53.593 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:53.593 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:53.593 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:53.593 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:53.593 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:53.593 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:53.593 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:53.593 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:53.593 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.593 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.593 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.593 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:53.593 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.593 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:53.593 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:53.593 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:53.593 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:53.593 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:53.593 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:53.593 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:53.593 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:53.593 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:53.593 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:53.593 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:53.593 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:53.593 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:53.593 09:44:08 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:53.593 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:53.593 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:53.593 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:53.593 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:53.593 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:53.594 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:53.594 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:53.594 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:53.594 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:53.594 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:53.594 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:53.594 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:53.594 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:53.594 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.594 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:53.594 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.594 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:53.594 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:53.594 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:53.594 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:12:01.742 09:44:16 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:01.742 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:01.742 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:01.742 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:01.743 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:01.743 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.743 09:44:16 
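The two "Found net devices under ..." lines above come from mapping each whitelisted PCI function to its kernel interface by globbing sysfs. Condensed, the technique is as follows (the PCI address is one of the two E810 ports from this run; on the CI machine this prints cvl_0_0):

    #!/usr/bin/env bash
    pci=0000:4b:00.0    # one of the two E810 ports found above
    # A netdev bound to a PCI function shows up as a directory under
    # /sys/bus/pci/devices/<addr>/net/; the glob expands to full paths.
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    # Keep only the interface names by stripping everything up to the last /.
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"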
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:01.743 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:01.743 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.690 ms 00:12:01.743 00:12:01.743 --- 10.0.0.2 ping statistics --- 00:12:01.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.743 rtt min/avg/max/mdev = 0.690/0.690/0.690/0.000 ms 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:01.743 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:01.743 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:12:01.743 00:12:01.743 --- 10.0.0.1 ping statistics --- 00:12:01.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.743 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3753386 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3753386 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 3753386 ']' 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.743 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:01.743 09:44:16 
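The nvmf_tcp_init sequence traced above turns the two E810 ports (which reach each other over the physical link) into a self-contained test topology: one port moves into a private network namespace as the target at 10.0.0.2, the other stays in the root namespace as the initiator at 10.0.0.1, a tagged iptables rule opens TCP/4420, and the two pings confirm both directions before the target app starts. The same steps condensed into one runnable sketch (interface names and addresses are the ones from this run; the comment tag is shortened from the full rule text the harness embeds; run as root):

    #!/usr/bin/env bash
    set -e
    ns=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$ns"
    ip link set cvl_0_0 netns "$ns"               # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, root namespace
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    # Open the NVMe/TCP port, tagging the rule so teardown can find it.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    # Verify both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec "$ns" ping -c 1 10.0.0.1
    # Teardown (the iptr helper later in the log) drops every tagged rule
    # in one pass:  iptables-save | grep -v SPDK_NVMF | iptables-restore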
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.744 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:01.744 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:02.005 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:02.005 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:12:02.005 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:02.005 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:02.005 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:02.005 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:02.005 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.006 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:02.006 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.006 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:02.006 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.006 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:02.006 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.006 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:02.006 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:02.006 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.006 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:02.006 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.006 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:02.006 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:02.006 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.006 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:02.006 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.006 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:02.006 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:02.006 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:02.006 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.006 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:12:02.006 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:14.240 Initializing NVMe Controllers 00:12:14.240 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:14.240 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:14.240 Initialization complete. Launching workers. 00:12:14.240 ======================================================== 00:12:14.240 Latency(us) 00:12:14.240 Device Information : IOPS MiB/s Average min max 00:12:14.240 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18704.18 73.06 3421.37 635.28 16176.88 00:12:14.240 ======================================================== 00:12:14.240 Total : 18704.18 73.06 3421.37 635.28 16176.88 00:12:14.240 00:12:14.240 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:14.240 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:14.240 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:14.240 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:12:14.240 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:14.240 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:12:14.240 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:14.240 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:14.240 rmmod nvme_tcp 00:12:14.240 rmmod nvme_fabrics 00:12:14.240 rmmod nvme_keyring 00:12:14.240 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:14.240 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:12:14.240 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:12:14.240 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3753386 ']' 00:12:14.240 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3753386 00:12:14.240 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 3753386 ']' 00:12:14.240 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 3753386 00:12:14.240 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:12:14.240 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:14.240 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3753386 00:12:14.240 09:44:27 
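The rpc_cmd calls above fully provision the target that the example app exposes: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem, a namespace, and a listener on 10.0.0.2:4420, after which spdk_nvme_perf drives queue-depth-64, 4 KiB, mixed random read/write I/O (-M 30) at it for 10 seconds. A sketch of the equivalent sequence using SPDK's standalone scripts/rpc.py client — rpc_cmd in the trace is a wrapper around the same RPCs, and $SPDK_DIR stands in for the checkout path:

    #!/usr/bin/env bash
    # rpc.py talks to the app's default RPC socket, /var/tmp/spdk.sock,
    # which a network namespace does not hide (it is a Unix-domain socket).
    rpc=$SPDK_DIR/scripts/rpc.py
    # Start the example target inside the namespace, as the harness does:
    ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/examples/nvmf -i 0 -g 10000 -m 0xF &
    sleep 2    # the harness instead polls the RPC socket (waitforlisten)
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                 # -> "Malloc0"
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Drive load from the initiator side, exactly as the run above does:
    $SPDK_DIR/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'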
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:12:14.240 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:12:14.240 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3753386' 00:12:14.240 killing process with pid 3753386 00:12:14.240 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 3753386 00:12:14.240 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 3753386 00:12:14.240 nvmf threads initialize successfully 00:12:14.240 bdev subsystem init successfully 00:12:14.240 created a nvmf target service 00:12:14.240 create targets's poll groups done 00:12:14.240 all subsystems of target started 00:12:14.240 nvmf target is running 00:12:14.240 all subsystems of target stopped 00:12:14.240 destroy targets's poll groups done 00:12:14.240 destroyed the nvmf target service 00:12:14.240 bdev subsystem finish successfully 00:12:14.240 nvmf threads destroy successfully 00:12:14.240 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:14.240 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:14.240 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:14.240 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:12:14.240 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:12:14.240 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:12:14.240 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:14.240 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:14.241 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:14.241 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.241 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:14.241 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.501 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:14.762 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:14.763 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:14.763 09:44:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:14.763 00:12:14.763 real 0m21.445s 00:12:14.763 user 0m46.458s 00:12:14.763 sys 0m7.088s 00:12:14.763 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.763 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:14.763 ************************************ 00:12:14.763 END TEST nvmf_example 00:12:14.763 ************************************ 00:12:14.763 09:44:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:14.763 09:44:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:14.763 09:44:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.763 09:44:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:14.763 ************************************ 00:12:14.763 START TEST nvmf_filesystem 00:12:14.763 ************************************ 00:12:14.763 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:14.763 * Looking for test storage... 00:12:14.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:14.763 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:14.763 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:12:14.763 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:15.026 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:15.026 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:15.026 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:15.026 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:15.026 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:15.026 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:15.026 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:15.026 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:15.026 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:15.026 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:15.026 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:15.026 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:15.026 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:15.026 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:15.026 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:15.026 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:15.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.027 --rc genhtml_branch_coverage=1 00:12:15.027 --rc genhtml_function_coverage=1 00:12:15.027 --rc genhtml_legend=1 00:12:15.027 --rc geninfo_all_blocks=1 00:12:15.027 --rc geninfo_unexecuted_blocks=1 00:12:15.027 00:12:15.027 ' 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:15.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.027 --rc genhtml_branch_coverage=1 00:12:15.027 --rc genhtml_function_coverage=1 00:12:15.027 --rc genhtml_legend=1 00:12:15.027 --rc geninfo_all_blocks=1 00:12:15.027 --rc geninfo_unexecuted_blocks=1 00:12:15.027 00:12:15.027 ' 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:15.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.027 --rc genhtml_branch_coverage=1 00:12:15.027 --rc genhtml_function_coverage=1 00:12:15.027 --rc genhtml_legend=1 00:12:15.027 --rc geninfo_all_blocks=1 00:12:15.027 --rc geninfo_unexecuted_blocks=1 00:12:15.027 00:12:15.027 ' 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:15.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.027 --rc genhtml_branch_coverage=1 00:12:15.027 --rc genhtml_function_coverage=1 00:12:15.027 --rc genhtml_legend=1 00:12:15.027 --rc geninfo_all_blocks=1 00:12:15.027 --rc geninfo_unexecuted_blocks=1 00:12:15.027 00:12:15.027 ' 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:12:15.027 09:44:30 
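The lt/cmp_versions trace above (scripts/common.sh) is deciding whether the installed lcov predates version 2: each version string is split on dots, dashes, and colons (IFS=.-:) and compared numerically, component by component, with the shorter array padded out. A simplified standalone rendering that assumes purely numeric components; the function name version_lt is mine, to avoid clashing with the real helper:

    #!/usr/bin/env bash
    # Returns success if $1 is strictly older than $2 (numeric components only).
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal
    }
    version_lt 1.15 2 && echo "1.15 < 2"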
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:15.027 
09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:15.027 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:12:15.028 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:15.028 #define SPDK_CONFIG_H 00:12:15.028 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:15.028 #define SPDK_CONFIG_APPS 1 00:12:15.028 #define SPDK_CONFIG_ARCH native 00:12:15.028 #undef SPDK_CONFIG_ASAN 00:12:15.028 #undef SPDK_CONFIG_AVAHI 00:12:15.028 #undef SPDK_CONFIG_CET 00:12:15.028 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:15.028 #define SPDK_CONFIG_COVERAGE 1 00:12:15.028 #define SPDK_CONFIG_CROSS_PREFIX 00:12:15.028 #undef SPDK_CONFIG_CRYPTO 00:12:15.028 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:15.028 #undef SPDK_CONFIG_CUSTOMOCF 00:12:15.028 #undef SPDK_CONFIG_DAOS 00:12:15.028 #define SPDK_CONFIG_DAOS_DIR 00:12:15.028 #define SPDK_CONFIG_DEBUG 1 00:12:15.028 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:15.028 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:15.028 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:15.028 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:15.028 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:15.028 #undef SPDK_CONFIG_DPDK_UADK 00:12:15.028 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:15.028 #define SPDK_CONFIG_EXAMPLES 1 00:12:15.028 #undef SPDK_CONFIG_FC 00:12:15.028 #define SPDK_CONFIG_FC_PATH 00:12:15.028 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:15.028 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:15.028 #define SPDK_CONFIG_FSDEV 1 00:12:15.028 #undef SPDK_CONFIG_FUSE 00:12:15.028 #undef SPDK_CONFIG_FUZZER 00:12:15.029 #define SPDK_CONFIG_FUZZER_LIB 00:12:15.029 #undef SPDK_CONFIG_GOLANG 00:12:15.029 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:15.029 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:15.029 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:15.029 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:15.029 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:15.029 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:15.029 #undef SPDK_CONFIG_HAVE_LZ4 00:12:15.029 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:15.029 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:15.029 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:15.029 #define SPDK_CONFIG_IDXD 1 00:12:15.029 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:15.029 #undef SPDK_CONFIG_IPSEC_MB 00:12:15.029 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:15.029 #define SPDK_CONFIG_ISAL 1 00:12:15.029 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:15.029 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:15.029 #define SPDK_CONFIG_LIBDIR 00:12:15.029 #undef SPDK_CONFIG_LTO 00:12:15.029 #define SPDK_CONFIG_MAX_LCORES 128 00:12:15.029 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:15.029 #define SPDK_CONFIG_NVME_CUSE 1 00:12:15.029 #undef SPDK_CONFIG_OCF 00:12:15.029 #define SPDK_CONFIG_OCF_PATH 00:12:15.029 #define SPDK_CONFIG_OPENSSL_PATH 00:12:15.029 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:15.029 #define SPDK_CONFIG_PGO_DIR 00:12:15.029 #undef SPDK_CONFIG_PGO_USE 00:12:15.029 #define SPDK_CONFIG_PREFIX /usr/local 00:12:15.029 #undef SPDK_CONFIG_RAID5F 00:12:15.029 #undef SPDK_CONFIG_RBD 00:12:15.029 #define SPDK_CONFIG_RDMA 1 00:12:15.029 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:15.029 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:15.029 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:15.029 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:15.029 #define SPDK_CONFIG_SHARED 1 00:12:15.029 #undef SPDK_CONFIG_SMA 00:12:15.029 #define SPDK_CONFIG_TESTS 1 00:12:15.029 #undef SPDK_CONFIG_TSAN 
00:12:15.029 #define SPDK_CONFIG_UBLK 1 00:12:15.029 #define SPDK_CONFIG_UBSAN 1 00:12:15.029 #undef SPDK_CONFIG_UNIT_TESTS 00:12:15.029 #undef SPDK_CONFIG_URING 00:12:15.029 #define SPDK_CONFIG_URING_PATH 00:12:15.029 #undef SPDK_CONFIG_URING_ZNS 00:12:15.029 #undef SPDK_CONFIG_USDT 00:12:15.029 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:15.029 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:15.029 #define SPDK_CONFIG_VFIO_USER 1 00:12:15.029 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:15.029 #define SPDK_CONFIG_VHOST 1 00:12:15.029 #define SPDK_CONFIG_VIRTIO 1 00:12:15.029 #undef SPDK_CONFIG_VTUNE 00:12:15.029 #define SPDK_CONFIG_VTUNE_DIR 00:12:15.029 #define SPDK_CONFIG_WERROR 1 00:12:15.029 #define SPDK_CONFIG_WPDK_DIR 00:12:15.029 #undef SPDK_CONFIG_XNVME 00:12:15.029 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:15.029 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:15.029 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:15.029 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:15.029 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.029 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.029 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.029 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.029 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.029 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.029 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:15.029 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.029 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:15.029 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:15.029 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:15.029 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:15.029 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:15.029 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:15.029 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:15.029 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:12:15.029 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:12:15.029 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:15.029 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:15.029 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:15.029 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:15.029 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:15.029 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:15.029 09:44:30 
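The applications.sh check a little further up (the [[ ... == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] test, which xtrace renders with every character escaped) probes whether this is a debug build by slurping include/spdk/config.h into a [[ ]] glob match. Stripped of the trace noise, the probe is just the following, assuming $SPDK_DIR points at the checkout:

    #!/usr/bin/env bash
    config_h=$SPDK_DIR/include/spdk/config.h
    # $(<file) expands to the whole file; the unquoted * on either side of
    # the quoted needle turns the test into a substring match.
    if [[ $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        echo "debug build"
    fi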
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:15.029 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:15.029 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:15.029 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:15.029 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:15.029 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:15.029 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:15.029 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:15.029 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:12:15.029 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:15.030 09:44:30 
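[annotation] Each `: 0` / `export FLAG` pair in the run above and below is bash xtrace output for the assign-a-default idiom: the `:` builtin evaluates its arguments and discards them, so `${VAR:=default}` assigns only when the variable is unset or empty. A sketch of the pattern, hedged since autotest_common.sh may phrase individual lines slightly differently:

# Give SPDK_TEST_NVME a default of 0 unless autorun-spdk.conf already set it,
# then export it for child processes. Under `set -x` this traces as:
#   : 0
#   export SPDK_TEST_NVME
: "${SPDK_TEST_NVME:=0}"
export SPDK_TEST_NVME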
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:12:15.030 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:15.031 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
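[annotation] The sanitizer block above rebuilds a LeakSanitizer suppression file on every run; ASAN_OPTIONS and UBSAN_OPTIONS are exported verbatim as shown. A minimal reconstruction of what the traced suppression-file commands amount to (a paraphrase; the `cat` at sh@206 appends project-provided suppressions whose path is elided in the trace):

asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"                       # start from a clean file
echo "leak:libfuse3.so" >> "$asan_suppression_file"   # known fuse3 leak, ignore it
export LSAN_OPTIONS="suppressions=$asan_suppression_file"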
00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 3756199 ]] 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 3756199 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 
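[annotation] The set_test_storage trace that follows picks a filesystem with at least the requested space free: it runs `df -T`, loads every mount into parallel associative arrays, then walks the candidate directories. A compact sketch of that pattern using the numbers from this run (array and variable names follow the trace; the candidate loop is condensed):

requested_size=2214592512                 # 2 GiB plus margin, as in the trace
declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source
    fss["$mount"]=$fs
    sizes["$mount"]=$((size * 1024))      # df -T reports 1K blocks; store bytes
    avails["$mount"]=$((avail * 1024))
    uses["$mount"]=$((use * 1024))
done < <(df -T | grep -v Filesystem)

# For the chosen mount (/ here, an overlay), refuse it if the test would push
# the filesystem past 95% full:
mount=/
target_space=${avails[$mount]}
new_size=$(( uses[$mount] + requested_size ))   # 11162738688 + 2214592512 = 13377331200
(( new_size * 100 / sizes[$mount] > 95 )) && echo "too full, try next candidate"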
00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.ESBxag 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.ESBxag/tests/target /tmp/spdk.ESBxag 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:12:15.032 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:12:15.033 09:44:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=118193770496 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356509184 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11162738688 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64666886144 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678252544 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847930880 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871302656 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23371776 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=216064 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=287744 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:15.033 09:44:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677126144 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678256640 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=1130496 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935634944 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935647232 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:15.033 * Looking for test storage... 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=118193770496 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=13377331200 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:15.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:15.033 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:15.034 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:12:15.034 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:15.034 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:15.034 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:15.034 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:15.034 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:15.034 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:15.034 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:15.034 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:15.034 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:15.034 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:15.034 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:12:15.034 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:15.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.295 --rc genhtml_branch_coverage=1 00:12:15.295 --rc genhtml_function_coverage=1 00:12:15.295 --rc genhtml_legend=1 00:12:15.295 --rc geninfo_all_blocks=1 00:12:15.295 --rc geninfo_unexecuted_blocks=1 00:12:15.295 00:12:15.295 ' 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:15.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.295 --rc genhtml_branch_coverage=1 00:12:15.295 --rc genhtml_function_coverage=1 00:12:15.295 --rc genhtml_legend=1 00:12:15.295 --rc geninfo_all_blocks=1 00:12:15.295 --rc geninfo_unexecuted_blocks=1 00:12:15.295 00:12:15.295 ' 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:15.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.295 --rc genhtml_branch_coverage=1 00:12:15.295 --rc genhtml_function_coverage=1 00:12:15.295 --rc genhtml_legend=1 00:12:15.295 --rc geninfo_all_blocks=1 00:12:15.295 --rc geninfo_unexecuted_blocks=1 00:12:15.295 00:12:15.295 ' 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:15.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.295 --rc genhtml_branch_coverage=1 00:12:15.295 --rc genhtml_function_coverage=1 00:12:15.295 --rc genhtml_legend=1 00:12:15.295 --rc geninfo_all_blocks=1 00:12:15.295 --rc geninfo_unexecuted_blocks=1 00:12:15.295 00:12:15.295 ' 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
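[annotation] The scripts/common.sh trace just above is `lt 1.15 2`, checking whether the installed lcov predates 2.x: both version strings are split on `.`, `-` and `:`, then compared field by field as integers. A trimmed sketch of the idea (simplified; the real cmp_versions also validates each field through its decimal helper):

lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly less
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}
lt 1.15 2 && echo "lcov predates 2.0"   # matches the trace: 1 < 2 in field 0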
-- nvmf/common.sh@7 -- # uname -s 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.295 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:15.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:15.296 09:44:30 
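[annotation] The "integer expression expected" diagnostic above is a real (harmless) glitch in this run: nvmf/common.sh line 33 evaluates `'[' '' -eq 1 ']'`, and test(1) cannot parse an empty string as an integer, so the guard prints an error and simply tests false. A sketch of the failure and the usual fix (the variable name below is a placeholder, since the trace elides which variable was empty):

some_flag=""                             # unset/empty in this run
[ "$some_flag" -eq 1 ] && echo hit       # -> "[: : integer expression expected"
# Defaulting the expansion keeps the test well formed:
[ "${some_flag:-0}" -eq 1 ] && echo hit  # quietly false when empty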
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:15.296 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:23.438 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:23.438 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:23.438 09:44:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:23.438 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:23.438 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.439 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:23.439 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.439 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:23.439 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:23.439 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:23.439 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:23.439 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.439 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:23.439 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:23.439 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.439 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:23.439 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:12:23.439 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:23.439 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:23.439 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:23.439 09:44:37 
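The trace above is the suite's PCI discovery pass: it filters the host's NICs down to the Intel E810 ports (vendor 0x8086, device 0x159b) it is allowed to test, then resolves each function to its kernel net interface through sysfs. A minimal sketch of the same idea, assuming only the sysfs layout; the pci_bus_cache arrays and the real helper internals from nvmf/common.sh are not reproduced here:

  # walk PCI space, keep E810 functions, print their net interfaces
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(cat "$pci/vendor") device=$(cat "$pci/device")
      [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
      echo "Found ${pci##*/} ($vendor - $device)"
      for net in "$pci"/net/*; do                    # e.g. .../net/cvl_0_0
          [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
      done
  done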
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:23.439 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:23.439 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:23.439 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:23.439 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:23.439 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:23.439 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:23.439 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:23.439 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:23.439 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:23.439 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:23.439 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:23.439 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:23.439 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:23.439 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:23.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:23.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:12:23.439 00:12:23.439 --- 10.0.0.2 ping statistics --- 00:12:23.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.439 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:23.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:23.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:12:23.439 00:12:23.439 --- 10.0.0.1 ping statistics --- 00:12:23.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.439 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:23.439 ************************************ 00:12:23.439 START TEST nvmf_filesystem_no_in_capsule 00:12:23.439 ************************************ 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3759847 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3759847 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3759847 ']' 00:12:23.439 
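nvmf_tcp_init, traced above, wires the two E810 ports back-to-back: one port stays in the default namespace as the initiator side (cvl_0_1, 10.0.0.1) while its link partner is moved into a private network namespace as the target side (cvl_0_0, 10.0.0.2), so NVMe/TCP traffic actually crosses the physical link. Condensed from the commands in the trace, with names and addresses exactly as logged (the addr-flush steps are omitted):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # admit NVMe/TCP
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1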
09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:23.439 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.439 [2024-11-27 09:44:38.340577] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:12:23.439 [2024-11-27 09:44:38.340639] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:23.439 [2024-11-27 09:44:38.442042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:23.439 [2024-11-27 09:44:38.496157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:23.439 [2024-11-27 09:44:38.496222] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:23.439 [2024-11-27 09:44:38.496230] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:23.439 [2024-11-27 09:44:38.496238] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:23.439 [2024-11-27 09:44:38.496245] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
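The DPDK EAL notices above come from nvmf_tgt starting inside that namespace. A sketch of the launch and the wait-for-RPC step, using the flags from the trace (-m 0xF gives four reactor cores, matching the four reactor notices below); the polling loop merely stands in for the suite's waitforlisten helper and assumes the default RPC socket path:

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # block until the target answers RPCs on its UNIX socket
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      sleep 0.1
  done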
00:12:23.439 [2024-11-27 09:44:38.498319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.439 [2024-11-27 09:44:38.498866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:23.439 [2024-11-27 09:44:38.498997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.439 [2024-11-27 09:44:38.498997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:23.700 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:23.700 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:23.700 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:23.700 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:23.700 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.960 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:23.960 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:23.960 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:23.960 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.960 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.960 [2024-11-27 09:44:39.203248] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:23.960 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.960 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:23.960 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.960 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.960 Malloc1 00:12:23.960 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.960 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:23.960 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.960 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.960 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.960 09:44:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:23.960 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.960 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.960 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.960 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:23.960 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.960 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.960 [2024-11-27 09:44:39.341777] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.960 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.960 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:23.960 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:23.960 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:23.960 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:23.960 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:23.960 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:23.960 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.960 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.960 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.960 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:23.960 { 00:12:23.960 "name": "Malloc1", 00:12:23.960 "aliases": [ 00:12:23.960 "902c9c89-2fe8-4199-a7df-9538782a208b" 00:12:23.960 ], 00:12:23.960 "product_name": "Malloc disk", 00:12:23.960 "block_size": 512, 00:12:23.960 "num_blocks": 1048576, 00:12:23.960 "uuid": "902c9c89-2fe8-4199-a7df-9538782a208b", 00:12:23.960 "assigned_rate_limits": { 00:12:23.960 "rw_ios_per_sec": 0, 00:12:23.960 "rw_mbytes_per_sec": 0, 00:12:23.960 "r_mbytes_per_sec": 0, 00:12:23.960 "w_mbytes_per_sec": 0 00:12:23.960 }, 00:12:23.960 "claimed": true, 00:12:23.960 "claim_type": "exclusive_write", 00:12:23.960 "zoned": false, 00:12:23.960 "supported_io_types": { 00:12:23.960 "read": 
true, 00:12:23.960 "write": true, 00:12:23.960 "unmap": true, 00:12:23.960 "flush": true, 00:12:23.960 "reset": true, 00:12:23.960 "nvme_admin": false, 00:12:23.960 "nvme_io": false, 00:12:23.960 "nvme_io_md": false, 00:12:23.960 "write_zeroes": true, 00:12:23.960 "zcopy": true, 00:12:23.960 "get_zone_info": false, 00:12:23.960 "zone_management": false, 00:12:23.960 "zone_append": false, 00:12:23.960 "compare": false, 00:12:23.960 "compare_and_write": false, 00:12:23.960 "abort": true, 00:12:23.960 "seek_hole": false, 00:12:23.960 "seek_data": false, 00:12:23.960 "copy": true, 00:12:23.960 "nvme_iov_md": false 00:12:23.960 }, 00:12:23.960 "memory_domains": [ 00:12:23.960 { 00:12:23.960 "dma_device_id": "system", 00:12:23.960 "dma_device_type": 1 00:12:23.960 }, 00:12:23.960 { 00:12:23.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.960 "dma_device_type": 2 00:12:23.960 } 00:12:23.960 ], 00:12:23.960 "driver_specific": {} 00:12:23.960 } 00:12:23.960 ]' 00:12:23.960 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:23.960 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:23.961 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:24.221 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:24.221 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:24.221 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:24.221 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:24.221 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:25.605 09:44:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:25.605 09:44:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:25.605 09:44:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:25.605 09:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:25.605 09:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:28.144 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:28.144 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:28.144 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:12:28.144 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:28.144 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:28.144 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:28.144 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:28.144 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:28.144 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:28.144 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:28.144 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:28.144 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:28.144 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:28.144 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:28.144 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:28.145 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:28.145 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:28.145 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:28.145 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:29.085 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:29.085 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:29.085 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:29.085 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:29.085 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.085 ************************************ 00:12:29.085 START TEST filesystem_ext4 00:12:29.085 ************************************ 00:12:29.085 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
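At this point the first pass (in-capsule data disabled, -c 0) is fully provisioned: a TCP transport, a 512 MiB malloc bdev exported as the single namespace of cnode1, a listener on the target-side address, and a kernel initiator connected to it. The suite then sanity-checks sizes, pulling block_size and num_blocks out of bdev_get_bdevs with jq (512 * 1048576 = 536870912 bytes) and requiring the initiator-side /dev/nvme0n1 to report the same before partitioning. A condensed sketch of the sequence; rpc_cmd in the trace wraps scripts/rpc.py, and the nvme connect host identifiers are dropped here for brevity:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1              # 512 MiB of 512 B blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%         # one partition for the FS tests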
00:12:29.085 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:29.086 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:29.086 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:29.086 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:29.086 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:29.086 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:29.086 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:29.086 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:29.086 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:29.086 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:29.086 mke2fs 1.47.0 (5-Feb-2023) 00:12:29.086 Discarding device blocks: 0/522240 done 00:12:29.086 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:29.086 Filesystem UUID: c0414b85-b166-4d9a-95c4-c5f891de4d00 00:12:29.086 Superblock backups stored on blocks: 00:12:29.086 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:29.086 00:12:29.086 Allocating group tables: 0/64 done 00:12:29.086 Writing inode tables: 0/64 done 00:12:29.606 Creating journal (8192 blocks): done 00:12:31.932 Writing superblocks and filesystem accounting information: 0/6450/64 done 00:12:31.932 00:12:31.932 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:31.932 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:37.253 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:37.514 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:37.514 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:37.514 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:37.514 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:37.514 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:37.514 
09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3759847 00:12:37.514 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:37.514 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:37.514 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:37.514 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:37.514 00:12:37.514 real 0m8.511s 00:12:37.514 user 0m0.036s 00:12:37.514 sys 0m0.072s 00:12:37.514 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.514 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:37.514 ************************************ 00:12:37.514 END TEST filesystem_ext4 00:12:37.514 ************************************ 00:12:37.514 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:37.514 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:37.514 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.514 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:37.514 ************************************ 00:12:37.514 START TEST filesystem_btrfs 00:12:37.514 ************************************ 00:12:37.514 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:37.514 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:37.514 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:37.514 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:37.514 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:37.514 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:37.514 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:37.514 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:37.514 09:44:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:37.514 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:37.514 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:37.777 btrfs-progs v6.8.1 00:12:37.777 See https://btrfs.readthedocs.io for more information. 00:12:37.777 00:12:37.777 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:37.777 NOTE: several default settings have changed in version 5.15, please make sure 00:12:37.777 this does not affect your deployments: 00:12:37.777 - DUP for metadata (-m dup) 00:12:37.777 - enabled no-holes (-O no-holes) 00:12:37.777 - enabled free-space-tree (-R free-space-tree) 00:12:37.777 00:12:37.777 Label: (null) 00:12:37.777 UUID: 49bdeb13-3cb8-4fd4-93b7-f3b3a9d2ef2e 00:12:37.777 Node size: 16384 00:12:37.777 Sector size: 4096 (CPU page size: 4096) 00:12:37.777 Filesystem size: 510.00MiB 00:12:37.777 Block group profiles: 00:12:37.777 Data: single 8.00MiB 00:12:37.777 Metadata: DUP 32.00MiB 00:12:37.777 System: DUP 8.00MiB 00:12:37.777 SSD detected: yes 00:12:37.777 Zoned device: no 00:12:37.777 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:37.777 Checksum: crc32c 00:12:37.777 Number of devices: 1 00:12:37.777 Devices: 00:12:37.777 ID SIZE PATH 00:12:37.777 1 510.00MiB /dev/nvme0n1p1 00:12:37.777 00:12:37.777 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:37.777 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:38.037 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:38.037 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:38.037 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:38.037 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:38.299 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:38.299 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:38.299 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3759847 00:12:38.299 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:38.299 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:38.299 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:38.299 
09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:38.299 00:12:38.299 real 0m0.684s 00:12:38.299 user 0m0.018s 00:12:38.299 sys 0m0.130s 00:12:38.299 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:38.299 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:38.299 ************************************ 00:12:38.299 END TEST filesystem_btrfs 00:12:38.299 ************************************ 00:12:38.299 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:38.299 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:38.299 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:38.299 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:38.299 ************************************ 00:12:38.299 START TEST filesystem_xfs 00:12:38.299 ************************************ 00:12:38.299 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:38.299 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:38.299 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:38.300 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:38.300 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:38.300 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:38.300 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:38.300 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:38.300 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:38.300 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:38.300 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:38.300 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:38.300 = sectsz=512 attr=2, projid32bit=1 00:12:38.300 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:38.300 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:38.300 data 
= bsize=4096 blocks=130560, imaxpct=25 00:12:38.300 = sunit=0 swidth=0 blks 00:12:38.300 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:38.300 log =internal log bsize=4096 blocks=16384, version=2 00:12:38.300 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:38.300 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:39.244 Discarding blocks...Done. 00:12:39.244 09:44:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:39.244 09:44:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:41.156 09:44:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:41.156 09:44:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:41.156 09:44:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:41.156 09:44:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:41.156 09:44:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:41.156 09:44:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:41.156 09:44:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3759847 00:12:41.156 09:44:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:41.156 09:44:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:41.156 09:44:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:41.156 09:44:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:41.156 00:12:41.156 real 0m2.818s 00:12:41.156 user 0m0.024s 00:12:41.156 sys 0m0.081s 00:12:41.156 09:44:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:41.156 09:44:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:41.156 ************************************ 00:12:41.156 END TEST filesystem_xfs 00:12:41.156 ************************************ 00:12:41.156 09:44:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:41.417 09:44:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:41.989 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.989 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.989 09:44:57 
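The ext4, btrfs, and xfs subtests above all execute the same nvmf_filesystem_create body; only the mkfs invocation differs (ext4 takes -F to force, btrfs and xfs take -f). A sketch of that shared body as it appears in the traces:

  mkfs."$fstype" "$force" /dev/nvme0n1p1    # fstype/force: ext4/-F, btrfs/-f, xfs/-f
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa                     # prove the FS is writable over NVMe/TCP
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 "$nvmfpid"                        # target process must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1     # namespace and partition still visible
  lsblk -l -o NAME | grep -q -w nvme0n1p1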
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:41.989 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:41.989 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:41.989 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.989 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:41.989 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.989 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:41.989 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.989 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.989 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:41.989 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.989 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:41.989 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3759847 00:12:41.989 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3759847 ']' 00:12:41.989 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3759847 00:12:41.989 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:41.989 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:41.989 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3759847 00:12:41.989 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:41.989 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:41.989 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3759847' 00:12:41.989 killing process with pid 3759847 00:12:41.989 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 3759847 00:12:41.989 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 3759847 00:12:42.251 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:42.251 00:12:42.251 real 0m19.323s 00:12:42.251 user 1m16.252s 00:12:42.251 sys 0m1.486s 00:12:42.251 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:42.251 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:42.251 ************************************ 00:12:42.251 END TEST nvmf_filesystem_no_in_capsule 00:12:42.251 ************************************ 00:12:42.251 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:42.251 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:42.251 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:42.251 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:42.251 ************************************ 00:12:42.251 START TEST nvmf_filesystem_in_capsule 00:12:42.251 ************************************ 00:12:42.251 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:42.251 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:42.251 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:42.251 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:42.251 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:42.251 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:42.251 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3763886 00:12:42.251 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3763886 00:12:42.251 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:42.251 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3763886 ']' 00:12:42.251 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.251 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:42.251 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:42.251 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:42.251 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:42.512 [2024-11-27 09:44:57.731280] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:12:42.512 [2024-11-27 09:44:57.731333] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.512 [2024-11-27 09:44:57.825902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:42.512 [2024-11-27 09:44:57.866360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:42.512 [2024-11-27 09:44:57.866399] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:42.512 [2024-11-27 09:44:57.866405] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:42.512 [2024-11-27 09:44:57.866414] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:42.512 [2024-11-27 09:44:57.866418] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:42.512 [2024-11-27 09:44:57.867927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:42.512 [2024-11-27 09:44:57.868084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:42.512 [2024-11-27 09:44:57.868108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.512 [2024-11-27 09:44:57.868109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:43.084 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:43.084 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:43.084 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:43.084 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:43.084 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:43.345 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:43.345 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:43.345 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:43.345 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.345 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:43.345 [2024-11-27 09:44:58.585875] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:43.345 09:44:58 
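The second pass, nvmf_filesystem_in_capsule, repeats the same provisioning and filesystem matrix under a fresh nvmf_tgt (pid 3763886). The only functional change in setup is the transport's in-capsule data size, so that writes up to 4 KiB can ride inside the command capsule instead of being fetched in a separate data transfer:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096    # was -c 0 in the first pass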
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.345 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:43.345 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.345 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:43.345 Malloc1 00:12:43.345 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.345 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:43.345 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.345 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:43.345 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.345 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:43.345 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.345 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:43.345 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.345 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.345 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.345 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:43.345 [2024-11-27 09:44:58.708749] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.345 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.345 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:43.345 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:43.345 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:43.345 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:43.345 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:43.345 09:44:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:43.345 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.345 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:43.345 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.345 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:43.345 { 00:12:43.345 "name": "Malloc1", 00:12:43.345 "aliases": [ 00:12:43.345 "ffd395c9-3b28-4d52-8a93-c282f4fe6fcf" 00:12:43.345 ], 00:12:43.345 "product_name": "Malloc disk", 00:12:43.345 "block_size": 512, 00:12:43.345 "num_blocks": 1048576, 00:12:43.345 "uuid": "ffd395c9-3b28-4d52-8a93-c282f4fe6fcf", 00:12:43.345 "assigned_rate_limits": { 00:12:43.345 "rw_ios_per_sec": 0, 00:12:43.345 "rw_mbytes_per_sec": 0, 00:12:43.345 "r_mbytes_per_sec": 0, 00:12:43.345 "w_mbytes_per_sec": 0 00:12:43.345 }, 00:12:43.345 "claimed": true, 00:12:43.345 "claim_type": "exclusive_write", 00:12:43.345 "zoned": false, 00:12:43.346 "supported_io_types": { 00:12:43.346 "read": true, 00:12:43.346 "write": true, 00:12:43.346 "unmap": true, 00:12:43.346 "flush": true, 00:12:43.346 "reset": true, 00:12:43.346 "nvme_admin": false, 00:12:43.346 "nvme_io": false, 00:12:43.346 "nvme_io_md": false, 00:12:43.346 "write_zeroes": true, 00:12:43.346 "zcopy": true, 00:12:43.346 "get_zone_info": false, 00:12:43.346 "zone_management": false, 00:12:43.346 "zone_append": false, 00:12:43.346 "compare": false, 00:12:43.346 "compare_and_write": false, 00:12:43.346 "abort": true, 00:12:43.346 "seek_hole": false, 00:12:43.346 "seek_data": false, 00:12:43.346 "copy": true, 00:12:43.346 "nvme_iov_md": false 00:12:43.346 }, 00:12:43.346 "memory_domains": [ 00:12:43.346 { 00:12:43.346 "dma_device_id": "system", 00:12:43.346 "dma_device_type": 1 00:12:43.346 }, 00:12:43.346 { 00:12:43.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.346 "dma_device_type": 2 00:12:43.346 } 00:12:43.346 ], 00:12:43.346 "driver_specific": {} 00:12:43.346 } 00:12:43.346 ]' 00:12:43.346 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:43.346 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:43.346 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:43.606 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:43.607 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:43.607 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:43.607 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:43.607 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:44.995 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:44.995 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:44.995 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:44.995 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:44.995 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:47.541 09:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:47.541 09:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:47.541 09:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:47.541 09:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:47.541 09:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:47.541 09:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:47.541 09:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:47.541 09:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:47.541 09:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:47.541 09:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:47.541 09:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:47.541 09:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:47.541 09:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:47.541 09:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:47.541 09:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:47.541 09:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:47.541 09:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:47.541 09:45:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:47.803 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:49.192 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:49.192 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:49.192 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:49.192 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:49.192 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:49.192 ************************************ 00:12:49.192 START TEST filesystem_in_capsule_ext4 00:12:49.192 ************************************ 00:12:49.192 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:49.192 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:49.192 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:49.192 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:49.192 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:49.192 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:49.192 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:49.192 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:49.192 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:49.192 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:49.192 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:49.192 mke2fs 1.47.0 (5-Feb-2023) 00:12:49.192 Discarding device blocks: 0/522240 done 00:12:49.192 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:49.192 Filesystem UUID: bcc09375-b15f-4c11-9be9-42ec6624b4a0 00:12:49.192 Superblock backups stored on blocks: 00:12:49.192 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:49.192 00:12:49.192 Allocating group tables: 0/64 done 00:12:49.192 Writing inode tables: 
0/64 done 00:12:49.453 Creating journal (8192 blocks): done 00:12:51.782 Writing superblocks and filesystem accounting information: 0/64 done 00:12:51.782 00:12:51.782 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:51.782 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:57.199 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:57.199 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:57.199 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:57.199 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:57.199 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:57.199 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:57.199 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3763886 00:12:57.199 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:57.199 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:57.199 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:57.199 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:57.199 00:12:57.199 real 0m8.075s 00:12:57.199 user 0m0.034s 00:12:57.199 sys 0m0.071s 00:12:57.199 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.199 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:57.199 ************************************ 00:12:57.199 END TEST filesystem_in_capsule_ext4 00:12:57.199 ************************************ 00:12:57.199 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:57.199 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:57.199 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.199 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:57.199 
************************************ 00:12:57.199 START TEST filesystem_in_capsule_btrfs 00:12:57.199 ************************************ 00:12:57.199 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:57.199 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:57.199 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:57.199 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:57.199 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:57.199 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:57.199 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:57.199 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:57.199 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:57.199 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:57.199 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:57.199 btrfs-progs v6.8.1 00:12:57.199 See https://btrfs.readthedocs.io for more information. 00:12:57.199 00:12:57.199 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:57.199 NOTE: several default settings have changed in version 5.15, please make sure 00:12:57.199 this does not affect your deployments: 00:12:57.199 - DUP for metadata (-m dup) 00:12:57.199 - enabled no-holes (-O no-holes) 00:12:57.199 - enabled free-space-tree (-R free-space-tree) 00:12:57.199 00:12:57.199 Label: (null) 00:12:57.199 UUID: 9e140bee-f014-4411-a58a-9ed9b304d4b7 00:12:57.199 Node size: 16384 00:12:57.199 Sector size: 4096 (CPU page size: 4096) 00:12:57.199 Filesystem size: 510.00MiB 00:12:57.199 Block group profiles: 00:12:57.199 Data: single 8.00MiB 00:12:57.199 Metadata: DUP 32.00MiB 00:12:57.199 System: DUP 8.00MiB 00:12:57.199 SSD detected: yes 00:12:57.199 Zoned device: no 00:12:57.200 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:57.200 Checksum: crc32c 00:12:57.200 Number of devices: 1 00:12:57.200 Devices: 00:12:57.200 ID SIZE PATH 00:12:57.200 1 510.00MiB /dev/nvme0n1p1 00:12:57.200 00:12:57.200 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:57.200 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:57.779 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:57.779 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:57.779 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:57.779 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:57.779 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:57.779 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:57.779 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3763886 00:12:57.779 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:57.779 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:57.779 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:57.779 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:57.779 00:12:57.779 real 0m0.613s 00:12:57.779 user 0m0.048s 00:12:57.779 sys 0m0.102s 00:12:57.779 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.779 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:12:57.779 ************************************ 00:12:57.779 END TEST filesystem_in_capsule_btrfs 00:12:57.779 ************************************ 00:12:57.779 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:57.779 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:57.779 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.779 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:57.779 ************************************ 00:12:57.779 START TEST filesystem_in_capsule_xfs 00:12:57.779 ************************************ 00:12:57.779 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:57.779 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:57.779 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:57.779 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:57.779 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:57.779 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:57.779 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:57.779 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:57.779 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:57.779 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:57.779 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:57.779 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:57.779 = sectsz=512 attr=2, projid32bit=1 00:12:57.779 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:57.779 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:57.779 data = bsize=4096 blocks=130560, imaxpct=25 00:12:57.779 = sunit=0 swidth=0 blks 00:12:57.779 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:57.779 log =internal log bsize=4096 blocks=16384, version=2 00:12:57.779 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:57.779 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:58.721 Discarding blocks...Done. 
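At this point each filesystem variant (ext4, btrfs, xfs) runs the same cycle from target/filesystem.sh: build the filesystem on the namespace's first partition, mount it, create and remove a file with sync barriers in between, unmount, and verify the nvmf target process is still alive. A minimal bash sketch of that cycle, condensed from the helpers visible in this trace — the function name, device path, mount point, and pid argument are stand-ins, not the script's exact internals:

    # Condensed sketch of the per-fstype cycle this log exercises.
    # dev/mnt/pid are placeholders; filesystem.sh derives them itself.
    run_fs_cycle() {
        local fstype=$1 dev=/dev/nvme0n1p1 mnt=/mnt/device pid=$2
        local force=-f
        [ "$fstype" = ext4 ] && force=-F    # mkfs.ext4 takes -F; btrfs/xfs take -f
        mkfs."$fstype" "$force" "$dev"
        mount "$dev" "$mnt"
        touch "$mnt/aaa" && sync            # write something, flush it out
        rm "$mnt/aaa" && sync
        umount "$mnt"
        kill -0 "$pid"                      # target (pid 3763886 in this run) must survive
    }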
00:12:58.721 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:58.721 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:01.260 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:01.260 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:01.260 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:01.260 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:01.260 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:01.260 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:01.260 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3763886 00:13:01.260 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:01.260 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:01.260 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:01.260 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:01.260 00:13:01.260 real 0m3.285s 00:13:01.260 user 0m0.026s 00:13:01.260 sys 0m0.080s 00:13:01.261 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:01.261 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:01.261 ************************************ 00:13:01.261 END TEST filesystem_in_capsule_xfs 00:13:01.261 ************************************ 00:13:01.261 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:01.261 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:01.521 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.521 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:01.521 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:13:01.521 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:01.521 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.521 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:01.521 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.782 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:13:01.782 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.782 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.782 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:01.782 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.782 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:01.782 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3763886 00:13:01.782 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3763886 ']' 00:13:01.782 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3763886 00:13:01.782 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:13:01.782 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:01.782 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3763886 00:13:01.782 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:01.782 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:01.782 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3763886' 00:13:01.782 killing process with pid 3763886 00:13:01.782 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 3763886 00:13:01.782 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 3763886 00:13:02.042 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:02.042 00:13:02.042 real 0m19.609s 00:13:02.042 user 1m17.546s 00:13:02.042 sys 0m1.437s 00:13:02.042 09:45:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.042 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:02.042 ************************************ 00:13:02.042 END TEST nvmf_filesystem_in_capsule 00:13:02.042 ************************************ 00:13:02.042 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:02.042 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:02.042 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:13:02.042 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:02.042 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:13:02.042 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:02.042 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:02.042 rmmod nvme_tcp 00:13:02.042 rmmod nvme_fabrics 00:13:02.042 rmmod nvme_keyring 00:13:02.042 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:02.042 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:13:02.042 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:13:02.042 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:13:02.042 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:02.042 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:02.042 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:02.042 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:13:02.042 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:13:02.042 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:02.042 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:13:02.042 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:02.042 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:02.042 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.042 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:02.042 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.587 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:04.587 00:13:04.587 real 0m49.370s 00:13:04.587 user 2m36.222s 00:13:04.587 sys 0m8.911s 00:13:04.587 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:04.588 
************************************ 00:13:04.588 END TEST nvmf_filesystem 00:13:04.588 ************************************ 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:04.588 ************************************ 00:13:04.588 START TEST nvmf_target_discovery 00:13:04.588 ************************************ 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:04.588 * Looking for test storage... 00:13:04.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:04.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.588 --rc genhtml_branch_coverage=1 00:13:04.588 --rc genhtml_function_coverage=1 00:13:04.588 --rc genhtml_legend=1 00:13:04.588 --rc geninfo_all_blocks=1 00:13:04.588 --rc geninfo_unexecuted_blocks=1 00:13:04.588 00:13:04.588 ' 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:04.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.588 --rc genhtml_branch_coverage=1 00:13:04.588 --rc genhtml_function_coverage=1 00:13:04.588 --rc genhtml_legend=1 00:13:04.588 --rc geninfo_all_blocks=1 00:13:04.588 --rc geninfo_unexecuted_blocks=1 00:13:04.588 00:13:04.588 ' 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:04.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.588 --rc genhtml_branch_coverage=1 00:13:04.588 --rc genhtml_function_coverage=1 00:13:04.588 --rc genhtml_legend=1 00:13:04.588 --rc geninfo_all_blocks=1 00:13:04.588 --rc geninfo_unexecuted_blocks=1 00:13:04.588 00:13:04.588 ' 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:04.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.588 --rc genhtml_branch_coverage=1 00:13:04.588 --rc genhtml_function_coverage=1 00:13:04.588 --rc genhtml_legend=1 00:13:04.588 --rc geninfo_all_blocks=1 00:13:04.588 --rc geninfo_unexecuted_blocks=1 00:13:04.588 00:13:04.588 ' 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.588 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:04.589 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.589 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:13:04.589 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:04.589 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:04.589 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:04.589 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:04.589 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:04.589 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:04.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:04.589 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:04.589 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:04.589 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:04.589 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:13:04.589 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:04.589 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:04.589 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:04.589 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:04.589 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:04.589 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:04.589 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:04.589 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:04.589 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:04.589 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.589 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:04.589 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.589 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:04.589 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:04.589 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:13:04.589 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.734 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:12.734 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:13:12.734 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:12.734 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:12.734 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:12.734 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:12.734 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:12.734 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:13:12.734 09:45:26 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:12.734 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:13:12.734 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:13:12.734 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:13:12.734 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:13:12.734 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:13:12.734 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:13:12.734 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:12.734 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:12.734 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:12.734 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:12.734 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:12.734 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:12.734 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:12.734 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:12.734 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:12.734 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:12.734 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:12.734 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:12.734 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:12.734 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:12.734 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:12.734 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:12.734 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:12.734 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:12.735 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:12.735 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:12.735 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:12.735 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:12.735 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:12.735 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:12.735 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:12.735 09:45:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:12.735 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:12.735 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:12.735 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:12.735 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:12.735 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:12.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:12.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:13:12.735 00:13:12.735 --- 10.0.0.2 ping statistics --- 00:13:12.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.735 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:13:12.735 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:12.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:12.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:13:12.735 00:13:12.735 --- 10.0.0.1 ping statistics --- 00:13:12.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.735 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:13:12.735 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:12.735 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:13:12.735 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:12.735 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:12.735 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:12.735 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:12.735 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:12.735 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:12.735 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:12.735 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:12.735 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:12.735 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:12.735 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.735 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3772669 00:13:12.735 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3772669 00:13:12.735 09:45:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:12.735 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 3772669 ']' 00:13:12.735 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.735 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:12.735 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.735 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:12.735 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.735 [2024-11-27 09:45:27.389316] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:13:12.735 [2024-11-27 09:45:27.389388] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.735 [2024-11-27 09:45:27.487561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:12.735 [2024-11-27 09:45:27.541085] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:12.735 [2024-11-27 09:45:27.541139] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:12.735 [2024-11-27 09:45:27.541148] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:12.735 [2024-11-27 09:45:27.541155] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:12.736 [2024-11-27 09:45:27.541172] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
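
The records above are the harness's nvmf_tcp_init fixture: one of the two E810 ports (cvl_0_0) is moved into a private network namespace, both ends are addressed out of 10.0.0.0/24, the NVMe/TCP port is opened in iptables with an SPDK_NVMF-tagged rule (so teardown can strip exactly that rule later), reachability is pinged in both directions, and nvmf_tgt is launched inside the namespace. A minimal standalone sketch of that sequence, using the interface names, addresses, and flags captured in this log — only the relative nvmf_tgt path is an assumption, the log uses the absolute workspace path:

# Isolate the target port in its own netns so initiator and target
# traffic crosses a real TCP path on a single host.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Tagged rule, as in the log, so cleanup can grep it back out of iptables-save.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
# Start the target inside the namespace (flags as captured above).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
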
00:13:12.736 [2024-11-27 09:45:27.543202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:12.736 [2024-11-27 09:45:27.543421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:12.736 [2024-11-27 09:45:27.543578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:12.736 [2024-11-27 09:45:27.543583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.998 [2024-11-27 09:45:28.271975] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.998 Null1 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.998 09:45:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.998 [2024-11-27 09:45:28.332450] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.998 Null2 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:13:12.998 Null3 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.998 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.999 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.999 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:12.999 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.999 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.999 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.999 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:12.999 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.999 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.999 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.999 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:12.999 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:12.999 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.999 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.999 Null4 00:13:12.999 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.999 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:12.999 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.999 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.999 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.999 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:12.999 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.999 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:13.261 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.261 09:45:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:13.261 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.261 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:13.261 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.261 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:13.261 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.261 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:13.261 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.261 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:13:13.261 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.261 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:13.261 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.261 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:13:13.261 00:13:13.261 Discovery Log Number of Records 6, Generation counter 6 00:13:13.261 =====Discovery Log Entry 0====== 00:13:13.261 trtype: tcp 00:13:13.261 adrfam: ipv4 00:13:13.261 subtype: current discovery subsystem 00:13:13.261 treq: not required 00:13:13.261 portid: 0 00:13:13.261 trsvcid: 4420 00:13:13.261 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:13.261 traddr: 10.0.0.2 00:13:13.261 eflags: explicit discovery connections, duplicate discovery information 00:13:13.261 sectype: none 00:13:13.261 =====Discovery Log Entry 1====== 00:13:13.261 trtype: tcp 00:13:13.261 adrfam: ipv4 00:13:13.261 subtype: nvme subsystem 00:13:13.261 treq: not required 00:13:13.261 portid: 0 00:13:13.261 trsvcid: 4420 00:13:13.261 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:13.261 traddr: 10.0.0.2 00:13:13.261 eflags: none 00:13:13.261 sectype: none 00:13:13.261 =====Discovery Log Entry 2====== 00:13:13.261 trtype: tcp 00:13:13.261 adrfam: ipv4 00:13:13.261 subtype: nvme subsystem 00:13:13.261 treq: not required 00:13:13.261 portid: 0 00:13:13.261 trsvcid: 4420 00:13:13.261 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:13.261 traddr: 10.0.0.2 00:13:13.261 eflags: none 00:13:13.261 sectype: none 00:13:13.261 =====Discovery Log Entry 3====== 00:13:13.261 trtype: tcp 00:13:13.261 adrfam: ipv4 00:13:13.261 subtype: nvme subsystem 00:13:13.261 treq: not required 00:13:13.261 portid: 0 00:13:13.261 trsvcid: 4420 00:13:13.261 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:13.261 traddr: 10.0.0.2 00:13:13.261 eflags: none 00:13:13.261 sectype: none 00:13:13.261 =====Discovery Log Entry 4====== 00:13:13.261 trtype: tcp 00:13:13.261 adrfam: ipv4 00:13:13.261 subtype: nvme subsystem 
00:13:13.261 treq: not required 00:13:13.261 portid: 0 00:13:13.261 trsvcid: 4420 00:13:13.261 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:13.261 traddr: 10.0.0.2 00:13:13.261 eflags: none 00:13:13.261 sectype: none 00:13:13.261 =====Discovery Log Entry 5====== 00:13:13.261 trtype: tcp 00:13:13.261 adrfam: ipv4 00:13:13.261 subtype: discovery subsystem referral 00:13:13.262 treq: not required 00:13:13.262 portid: 0 00:13:13.262 trsvcid: 4430 00:13:13.262 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:13.262 traddr: 10.0.0.2 00:13:13.262 eflags: none 00:13:13.262 sectype: none 00:13:13.262 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:13.262 Perform nvmf subsystem discovery via RPC 00:13:13.262 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:13.262 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.262 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:13.262 [ 00:13:13.262 { 00:13:13.262 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:13.262 "subtype": "Discovery", 00:13:13.262 "listen_addresses": [ 00:13:13.262 { 00:13:13.262 "trtype": "TCP", 00:13:13.262 "adrfam": "IPv4", 00:13:13.262 "traddr": "10.0.0.2", 00:13:13.262 "trsvcid": "4420" 00:13:13.262 } 00:13:13.262 ], 00:13:13.262 "allow_any_host": true, 00:13:13.262 "hosts": [] 00:13:13.262 }, 00:13:13.262 { 00:13:13.262 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:13.262 "subtype": "NVMe", 00:13:13.262 "listen_addresses": [ 00:13:13.262 { 00:13:13.262 "trtype": "TCP", 00:13:13.262 "adrfam": "IPv4", 00:13:13.262 "traddr": "10.0.0.2", 00:13:13.262 "trsvcid": "4420" 00:13:13.262 } 00:13:13.262 ], 00:13:13.262 "allow_any_host": true, 00:13:13.262 "hosts": [], 00:13:13.262 "serial_number": "SPDK00000000000001", 00:13:13.262 "model_number": "SPDK bdev Controller", 00:13:13.262 "max_namespaces": 32, 00:13:13.262 "min_cntlid": 1, 00:13:13.262 "max_cntlid": 65519, 00:13:13.262 "namespaces": [ 00:13:13.262 { 00:13:13.262 "nsid": 1, 00:13:13.262 "bdev_name": "Null1", 00:13:13.262 "name": "Null1", 00:13:13.262 "nguid": "FBA435898F3E41F2A1D3975D0FC4B580", 00:13:13.262 "uuid": "fba43589-8f3e-41f2-a1d3-975d0fc4b580" 00:13:13.262 } 00:13:13.262 ] 00:13:13.262 }, 00:13:13.262 { 00:13:13.262 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:13.262 "subtype": "NVMe", 00:13:13.262 "listen_addresses": [ 00:13:13.262 { 00:13:13.262 "trtype": "TCP", 00:13:13.262 "adrfam": "IPv4", 00:13:13.262 "traddr": "10.0.0.2", 00:13:13.262 "trsvcid": "4420" 00:13:13.262 } 00:13:13.262 ], 00:13:13.262 "allow_any_host": true, 00:13:13.262 "hosts": [], 00:13:13.262 "serial_number": "SPDK00000000000002", 00:13:13.262 "model_number": "SPDK bdev Controller", 00:13:13.262 "max_namespaces": 32, 00:13:13.262 "min_cntlid": 1, 00:13:13.262 "max_cntlid": 65519, 00:13:13.262 "namespaces": [ 00:13:13.262 { 00:13:13.262 "nsid": 1, 00:13:13.262 "bdev_name": "Null2", 00:13:13.262 "name": "Null2", 00:13:13.262 "nguid": "81BBBCEE82834B6793D12E522B936567", 00:13:13.262 "uuid": "81bbbcee-8283-4b67-93d1-2e522b936567" 00:13:13.262 } 00:13:13.262 ] 00:13:13.262 }, 00:13:13.262 { 00:13:13.262 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:13.262 "subtype": "NVMe", 00:13:13.262 "listen_addresses": [ 00:13:13.262 { 00:13:13.262 "trtype": "TCP", 00:13:13.262 "adrfam": "IPv4", 00:13:13.262 "traddr": "10.0.0.2", 
00:13:13.262 "trsvcid": "4420" 00:13:13.262 } 00:13:13.262 ], 00:13:13.262 "allow_any_host": true, 00:13:13.262 "hosts": [], 00:13:13.262 "serial_number": "SPDK00000000000003", 00:13:13.262 "model_number": "SPDK bdev Controller", 00:13:13.262 "max_namespaces": 32, 00:13:13.262 "min_cntlid": 1, 00:13:13.262 "max_cntlid": 65519, 00:13:13.262 "namespaces": [ 00:13:13.262 { 00:13:13.262 "nsid": 1, 00:13:13.262 "bdev_name": "Null3", 00:13:13.262 "name": "Null3", 00:13:13.262 "nguid": "8C162F4B1AA047DCB898A6FBECFEF54E", 00:13:13.262 "uuid": "8c162f4b-1aa0-47dc-b898-a6fbecfef54e" 00:13:13.262 } 00:13:13.262 ] 00:13:13.262 }, 00:13:13.262 { 00:13:13.262 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:13.262 "subtype": "NVMe", 00:13:13.262 "listen_addresses": [ 00:13:13.262 { 00:13:13.262 "trtype": "TCP", 00:13:13.262 "adrfam": "IPv4", 00:13:13.262 "traddr": "10.0.0.2", 00:13:13.262 "trsvcid": "4420" 00:13:13.262 } 00:13:13.262 ], 00:13:13.262 "allow_any_host": true, 00:13:13.262 "hosts": [], 00:13:13.262 "serial_number": "SPDK00000000000004", 00:13:13.262 "model_number": "SPDK bdev Controller", 00:13:13.262 "max_namespaces": 32, 00:13:13.262 "min_cntlid": 1, 00:13:13.262 "max_cntlid": 65519, 00:13:13.262 "namespaces": [ 00:13:13.262 { 00:13:13.262 "nsid": 1, 00:13:13.262 "bdev_name": "Null4", 00:13:13.262 "name": "Null4", 00:13:13.262 "nguid": "89CE23804EF549E5901FFD0068F46885", 00:13:13.262 "uuid": "89ce2380-4ef5-49e5-901f-fd0068f46885" 00:13:13.262 } 00:13:13.262 ] 00:13:13.262 } 00:13:13.262 ] 00:13:13.262 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.524 09:45:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:13.524 09:45:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:13.524 rmmod nvme_tcp 00:13:13.524 rmmod nvme_fabrics 00:13:13.524 rmmod nvme_keyring 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3772669 ']' 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3772669 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 3772669 ']' 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 3772669 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:13.524 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3772669 00:13:13.785 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:13.785 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:13.785 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3772669' 00:13:13.785 killing process with pid 3772669 00:13:13.785 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 3772669 00:13:13.785 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 3772669 00:13:13.785 09:45:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:13.785 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:13.785 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:13.785 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:13:13.785 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:13:13.785 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:13.785 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:13:13.785 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:13.785 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:13.785 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.785 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:13.785 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:16.331 00:13:16.331 real 0m11.729s 00:13:16.331 user 0m9.025s 00:13:16.331 sys 0m6.133s 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:16.331 ************************************ 00:13:16.331 END TEST nvmf_target_discovery 00:13:16.331 ************************************ 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:16.331 ************************************ 00:13:16.331 START TEST nvmf_referrals 00:13:16.331 ************************************ 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:16.331 * Looking for test storage... 
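
Before the nvmf_referrals run above gets going, it is worth condensing what the just-ended nvmf_target_discovery test did: it created four 512-byte-block null bdevs, wrapped each in its own subsystem (cnode1-4) listening on 10.0.0.2:4420, exposed the discovery subsystem on the same listener, registered a referral to port 4430, verified six discovery log records via nvme discover plus the same view through nvmf_get_subsystems, then tore the fixture down (subsystems, bdevs, and referral deleted; nvmf_tgt killed; nvme-tcp/fabrics/keyring modules unloaded; the SPDK_NVMF-tagged iptables rule stripped via iptables-save | grep -v SPDK_NVMF | iptables-restore; namespace removed). Reduced to plain rpc.py calls — the harness's rpc_cmd wrapper presumably fronts scripts/rpc.py on /var/tmp/spdk.sock — the build-up is roughly:

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
for i in 1 2 3 4; do
  ./scripts/rpc.py bdev_null_create Null$i 102400 512   # size/block-size args as captured above
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
      -a -s SPDK0000000000000$i
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
      -t tcp -a 10.0.0.2 -s 4420
done
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
nvme discover -t tcp -a 10.0.0.2 -s 4420   # --hostnqn/--hostid omitted here; expect 6 records
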
00:13:16.331 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:16.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.331 --rc genhtml_branch_coverage=1 00:13:16.331 --rc genhtml_function_coverage=1 00:13:16.331 --rc genhtml_legend=1 00:13:16.331 --rc geninfo_all_blocks=1 00:13:16.331 --rc geninfo_unexecuted_blocks=1 00:13:16.331 00:13:16.331 ' 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:16.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.331 --rc genhtml_branch_coverage=1 00:13:16.331 --rc genhtml_function_coverage=1 00:13:16.331 --rc genhtml_legend=1 00:13:16.331 --rc geninfo_all_blocks=1 00:13:16.331 --rc geninfo_unexecuted_blocks=1 00:13:16.331 00:13:16.331 ' 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:16.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.331 --rc genhtml_branch_coverage=1 00:13:16.331 --rc genhtml_function_coverage=1 00:13:16.331 --rc genhtml_legend=1 00:13:16.331 --rc geninfo_all_blocks=1 00:13:16.331 --rc geninfo_unexecuted_blocks=1 00:13:16.331 00:13:16.331 ' 00:13:16.331 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:16.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.331 --rc genhtml_branch_coverage=1 00:13:16.331 --rc genhtml_function_coverage=1 00:13:16.331 --rc genhtml_legend=1 00:13:16.331 --rc geninfo_all_blocks=1 00:13:16.331 --rc geninfo_unexecuted_blocks=1 00:13:16.331 00:13:16.332 ' 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:16.332 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
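
Two of the referral constants have printed at this point; referrals.sh's remaining ones (127.0.0.4, referral port 4430, the well-known discovery NQN, and cnode1 as the test subsystem) follow just below, after which the script re-runs the same nvmf_tcp_init fixture traced earlier. Taken together, the three loopback addresses parameterize an add/get/remove referral exercise. A sketch of the presumed shape of that loop — the exact assertions live in test/nvmf/target/referrals.sh and are not reconstructed here; nvmf_discovery_get_referrals is the standard companion RPC, though this section only shows add/remove:

for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a $ip -s 4430
done
./scripts/rpc.py nvmf_discovery_get_referrals        # expect three referral entries
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a $ip -s 4430
done
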
00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:13:16.332 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:13:24.475 09:45:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:24.475 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:24.475 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:24.475 
09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:24.475 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:24.475 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:24.475 09:45:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:24.475 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:24.476 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:24.476 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:24.476 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:24.476 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:24.476 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:24.476 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:24.476 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:24.476 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:24.476 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:24.476 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:24.476 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:24.476 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:24.476 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:24.476 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:24.476 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:24.476 09:45:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:24.476 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:24.476 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.703 ms 00:13:24.476 00:13:24.476 --- 10.0.0.2 ping statistics --- 00:13:24.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.476 rtt min/avg/max/mdev = 0.703/0.703/0.703/0.000 ms 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:24.476 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:24.476 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:13:24.476 00:13:24.476 --- 10.0.0.1 ping statistics --- 00:13:24.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.476 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3777276 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3777276 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 3777276 ']' 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
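The block above is nvmf_tcp_init: the two E810 ports are split between the host and a dedicated network namespace so initiator traffic actually crosses the link, each side gets one address from 10.0.0.0/24, an iptables rule opens the NVMe/TCP port on the initiator interface, and a ping in each direction proves the path before the target starts. A minimal standalone sketch of the same pattern, assuming two already-probed ice interfaces named cvl_0_0 and cvl_0_1 (names, addresses and port taken from this run):

# move the target port into its own namespace; the initiator port stays in the host
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP listener port
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1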
00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:24.476 [2024-11-27 09:45:39.236512] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:13:24.476 [2024-11-27 09:45:39.236574] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.476 [2024-11-27 09:45:39.310519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:24.476 [2024-11-27 09:45:39.358450] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:24.476 [2024-11-27 09:45:39.358506] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:24.476 [2024-11-27 09:45:39.358513] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:24.476 [2024-11-27 09:45:39.358518] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:24.476 [2024-11-27 09:45:39.358523] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:24.476 [2024-11-27 09:45:39.360320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.476 [2024-11-27 09:45:39.360601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:24.476 [2024-11-27 09:45:39.360764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:24.476 [2024-11-27 09:45:39.360765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:24.476 [2024-11-27 09:45:39.525799] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
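nvmfappstart above launches nvmf_tgt inside the target namespace, and waitforlisten then polls the UNIX-domain RPC socket until the application answers. A condensed sketch of that startup handshake, assuming the SPDK tree at the workspace path used in this run (the harness's own waitforlisten loop differs in detail):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# ready once the RPC socket responds; bail out if the app died during startup
until "$SPDK/scripts/rpc.py" -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" || exit 1
        sleep 0.5
done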
00:13:24.476 [2024-11-27 09:45:39.542209] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.476 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:24.477 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
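From this point the test is driven entirely over JSON-RPC: rpc_cmd creates one TCP transport, attaches a discovery listener on 10.0.0.2:8009, and registers three referrals that the discovery service is then expected to hand back. The same sequence as a standalone script (arguments copied from this run; rpc here is a stand-in for the harness's rpc_cmd wrapper):

rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }
rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
(( $(rpc nvmf_discovery_get_referrals | jq length) == 3 ))   # all three registered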
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:24.477 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:24.477 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:24.477 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:24.477 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:24.477 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:24.477 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:24.477 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:24.477 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:24.477 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:24.477 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.477 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:24.477 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.477 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:24.477 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.477 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:24.477 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.477 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:24.477 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.477 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:24.477 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.477 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:24.477 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:24.477 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.477 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:24.477 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.738 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:24.738 09:45:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:24.738 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:24.738 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:24.738 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:24.738 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:24.738 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:24.738 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:24.738 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:24.738 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:24.738 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.738 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:24.738 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.738 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:24.738 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.738 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:24.738 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.738 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:24.738 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:24.738 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:24.738 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:24.738 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.738 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:24.738 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:24.738 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.001 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:25.001 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:25.001 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:25.001 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:13:25.001 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:25.001 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:25.001 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:25.001 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:25.001 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:25.001 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:25.001 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:25.001 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:25.001 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:25.001 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:25.001 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:25.264 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:25.264 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:25.264 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:25.264 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:25.264 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:25.264 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:25.525 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:25.525 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:25.525 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.525 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:25.525 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.525 09:45:40 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:25.525 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:25.525 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:25.525 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:25.525 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.525 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:25.525 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:25.525 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.525 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:25.525 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:25.525 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:25.525 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:25.525 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:25.525 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:25.525 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:25.525 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:25.787 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:25.787 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:25.787 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:25.787 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:25.787 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:25.787 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:25.787 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:26.048 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:26.048 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:26.048 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:26.048 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:13:26.048 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:26.048 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:26.048 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:26.048 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:26.048 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.048 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:26.048 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.048 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:26.048 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:26.048 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.048 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:26.048 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.048 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:26.309 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:26.309 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:26.309 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:26.309 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:26.309 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:26.309 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:26.309 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:26.309 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:26.309 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:26.309 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:26.309 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:26.309 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:13:26.309 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
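The add/remove loop above checks every mutation from both ends: nvmf_discovery_get_referrals reports what the target believes, while nvme discover reads the discovery log page over the wire, and get_referral_ips sorts and compares the two views. A sketch of that cross-check, assuming nvme-cli and the host NQN/ID used in this run:

hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
rpc_ips=$(rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)
wire_ips=$(nvme discover --hostnqn="$hostnqn" --hostid="${hostnqn##*:}" \
        -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort)
[[ $rpc_ips == "$wire_ips" ]]   # RPC view and on-the-wire view must agree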
00:13:26.309 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:13:26.309 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:26.309 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:26.309 rmmod nvme_tcp 00:13:26.309 rmmod nvme_fabrics 00:13:26.571 rmmod nvme_keyring 00:13:26.571 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:26.571 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:13:26.571 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:13:26.571 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3777276 ']' 00:13:26.571 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3777276 00:13:26.571 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 3777276 ']' 00:13:26.571 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 3777276 00:13:26.571 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:13:26.571 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:26.571 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3777276 00:13:26.571 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:26.571 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:26.571 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3777276' 00:13:26.571 killing process with pid 3777276 00:13:26.571 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 3777276 00:13:26.571 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 3777276 00:13:26.571 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:26.571 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:26.571 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:26.571 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:13:26.571 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:13:26.571 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:26.571 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:13:26.571 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:26.571 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:26.571 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.571 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:26.571 09:45:42 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:29.121 00:13:29.121 real 0m12.712s 00:13:29.121 user 0m13.554s 00:13:29.121 sys 0m6.574s 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.121 ************************************ 00:13:29.121 END TEST nvmf_referrals 00:13:29.121 ************************************ 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:29.121 ************************************ 00:13:29.121 START TEST nvmf_connect_disconnect 00:13:29.121 ************************************ 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:29.121 * Looking for test storage... 00:13:29.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:29.121 09:45:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:29.121 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:29.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.121 --rc genhtml_branch_coverage=1 00:13:29.121 --rc genhtml_function_coverage=1 00:13:29.121 --rc genhtml_legend=1 00:13:29.121 --rc geninfo_all_blocks=1 00:13:29.121 --rc geninfo_unexecuted_blocks=1 00:13:29.121 00:13:29.121 ' 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:29.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.122 --rc genhtml_branch_coverage=1 00:13:29.122 --rc genhtml_function_coverage=1 00:13:29.122 --rc genhtml_legend=1 00:13:29.122 --rc geninfo_all_blocks=1 00:13:29.122 --rc geninfo_unexecuted_blocks=1 00:13:29.122 00:13:29.122 ' 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:29.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.122 --rc genhtml_branch_coverage=1 00:13:29.122 --rc genhtml_function_coverage=1 00:13:29.122 --rc genhtml_legend=1 00:13:29.122 --rc geninfo_all_blocks=1 00:13:29.122 --rc geninfo_unexecuted_blocks=1 00:13:29.122 00:13:29.122 ' 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:29.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.122 --rc genhtml_branch_coverage=1 00:13:29.122 --rc genhtml_function_coverage=1 00:13:29.122 --rc genhtml_legend=1 00:13:29.122 --rc geninfo_all_blocks=1 00:13:29.122 --rc geninfo_unexecuted_blocks=1 00:13:29.122 00:13:29.122 ' 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:29.122 09:45:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:29.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.122 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:29.123 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:29.123 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:13:29.123 09:45:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:37.267 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:37.267 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:37.268 
09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:37.268 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:37.268 
09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:37.268 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:37.268 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:37.268 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:37.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:37.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:13:37.268 00:13:37.268 --- 10.0.0.2 ping statistics --- 00:13:37.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.268 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:13:37.268 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:37.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:37.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:13:37.269 00:13:37.269 --- 10.0.0.1 ping statistics --- 00:13:37.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.269 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:13:37.269 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:37.269 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:13:37.269 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:37.269 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:37.269 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:37.269 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:37.269 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:37.269 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:37.269 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:37.269 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:37.269 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:37.269 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:37.269 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:37.269 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=3782046 00:13:37.269 09:45:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3782046 00:13:37.269 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:37.269 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 3782046 ']' 00:13:37.269 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.269 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:37.269 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.269 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:37.269 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:37.269 [2024-11-27 09:45:51.958394] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:13:37.269 [2024-11-27 09:45:51.958461] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.269 [2024-11-27 09:45:52.057846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:37.269 [2024-11-27 09:45:52.111232] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:37.269 [2024-11-27 09:45:52.111284] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:37.269 [2024-11-27 09:45:52.111294] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:37.269 [2024-11-27 09:45:52.111301] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:37.269 [2024-11-27 09:45:52.111307] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
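Distilled from the trace above, the network plumbing behind the test: one E810 port (cvl_0_0) is moved into a private namespace and the target runs inside it, so initiator (cvl_0_1, 10.0.0.1) and target (cvl_0_0, 10.0.0.2) traffic crosses the physical link rather than being short-circuited by the host stack. The commands below appear verbatim in the trace; $SPDK_ROOT stands in for the long Jenkins workspace path:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port; the run also tags the rule with an SPDK_NVMF
    # comment so teardown can strip exactly this rule later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespace -> host
    # the target itself is then launched inside the namespace:
    ip netns exec cvl_0_0_ns_spdk "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF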
00:13:37.269 [2024-11-27 09:45:52.113635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.269 [2024-11-27 09:45:52.113794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:37.269 [2024-11-27 09:45:52.113952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.269 [2024-11-27 09:45:52.113952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:37.531 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:37.531 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:13:37.531 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:37.531 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:37.531 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:37.531 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:37.531 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:37.531 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.531 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:37.531 [2024-11-27 09:45:52.843180] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:37.531 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.531 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:37.531 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.531 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:37.531 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.531 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:37.531 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:37.531 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.532 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:37.532 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.532 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:37.532 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.532 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:37.532 09:45:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.532 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:37.532 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.532 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:37.532 [2024-11-27 09:45:52.921539] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:37.532 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.532 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:13:37.532 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:13:37.532 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:41.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.044 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.852 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:55.852 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:55.852 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:55.852 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:13:55.852 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:55.852 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:13:55.852 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:55.852 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:55.852 rmmod nvme_tcp 00:13:55.852 rmmod nvme_fabrics 00:13:55.852 rmmod nvme_keyring 00:13:55.852 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:55.852 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:13:55.852 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:13:55.852 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3782046 ']' 00:13:55.852 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3782046 00:13:55.852 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3782046 ']' 00:13:55.852 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 3782046 00:13:55.852 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
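The RPC bring-up and the loop that produces the five "disconnected 1 controller(s)" lines, condensed from the trace. rpc_cmd in the trace resolves to SPDK's rpc.py (path shortened here); the NQN, address, and sizes are exactly the ones shown above, while the connect/disconnect pair sketches the shape of the loop rather than quoting connect_disconnect.sh verbatim:

    # provision the target over the RPC socket
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc.py bdev_malloc_create 64 512                  # 64 MiB ram disk, 512 B blocks -> Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # num_iterations=5 in the trace; each nvme-cli disconnect prints the
    # "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" line
    for i in $(seq 1 5); do
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done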
00:13:55.852 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:55.852 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3782046 00:13:56.114 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:56.114 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:56.114 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3782046' 00:13:56.114 killing process with pid 3782046 00:13:56.114 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 3782046 00:13:56.114 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 3782046 00:13:56.114 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:56.114 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:56.114 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:56.114 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:13:56.114 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:13:56.114 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:56.114 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:13:56.114 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:56.114 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:56.114 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.114 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:56.114 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:58.662 00:13:58.662 real 0m29.382s 00:13:58.662 user 1m19.242s 00:13:58.662 sys 0m7.135s 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:58.662 ************************************ 00:13:58.662 END TEST nvmf_connect_disconnect 00:13:58.662 ************************************ 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:58.662 09:46:13 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:58.662 ************************************ 00:13:58.662 START TEST nvmf_multitarget 00:13:58.662 ************************************ 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:58.662 * Looking for test storage... 00:13:58.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:58.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.662 --rc genhtml_branch_coverage=1 00:13:58.662 --rc genhtml_function_coverage=1 00:13:58.662 --rc genhtml_legend=1 00:13:58.662 --rc geninfo_all_blocks=1 00:13:58.662 --rc geninfo_unexecuted_blocks=1 00:13:58.662 00:13:58.662 ' 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:58.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.662 --rc genhtml_branch_coverage=1 00:13:58.662 --rc genhtml_function_coverage=1 00:13:58.662 --rc genhtml_legend=1 00:13:58.662 --rc geninfo_all_blocks=1 00:13:58.662 --rc geninfo_unexecuted_blocks=1 00:13:58.662 00:13:58.662 ' 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:58.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.662 --rc genhtml_branch_coverage=1 00:13:58.662 --rc genhtml_function_coverage=1 00:13:58.662 --rc genhtml_legend=1 00:13:58.662 --rc geninfo_all_blocks=1 00:13:58.662 --rc geninfo_unexecuted_blocks=1 00:13:58.662 00:13:58.662 ' 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:58.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.662 --rc genhtml_branch_coverage=1 00:13:58.662 --rc genhtml_function_coverage=1 00:13:58.662 --rc genhtml_legend=1 00:13:58.662 --rc geninfo_all_blocks=1 00:13:58.662 --rc geninfo_unexecuted_blocks=1 00:13:58.662 00:13:58.662 ' 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:58.662 09:46:13 
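The scripts/common.sh digression above is a pure-bash version comparison: lt 1.15 2 splits both versions on dots, dashes, and colons, then walks the fields until one differs, here deciding whether the installed lcov predates 2.x so the matching coverage flags get exported. A condensed sketch of the same field walk (missing fields default to 0, as in the trace):

    IFS=.-: read -ra ver1 <<< "1.15"
    IFS=.-: read -ra ver2 <<< "2"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { echo lt; break; }   # 1 < 2 -> "lt"
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { echo gt; break; }
    done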
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:58.662 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.663 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.663 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.663 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:58.663 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.663 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:13:58.663 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:58.663 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:58.663 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:58.663 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:58.663 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:58.663 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:58.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:58.663 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:58.663 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:58.663 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:58.663 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:58.663 09:46:13 
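One detail from the common.sh sourcing above worth pulling out: the host identity is generated fresh each run with nvme-cli, and NVME_HOSTID is the same UUID with the NQN prefix removed. The stripping shown here is one way to derive it, not necessarily the script's exact line:

    NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # drop the NQN prefix, keep the bare UUID
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")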
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:58.663 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:58.663 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:58.663 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:58.663 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:58.663 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:58.663 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.663 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:58.663 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.663 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:58.663 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:58.663 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:13:58.663 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:06.811 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:06.811 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:06.811 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:06.811 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:06.811 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:06.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:06.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:14:06.812 00:14:06.812 --- 10.0.0.2 ping statistics --- 00:14:06.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.812 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:14:06.812 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:06.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:06.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:14:06.812 00:14:06.812 --- 10.0.0.1 ping statistics --- 00:14:06.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.812 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:14:06.812 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:06.812 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:14:06.812 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:06.812 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:06.812 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:06.812 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:06.812 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:06.812 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:06.812 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:06.812 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:06.812 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:06.812 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:06.812 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:06.812 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3790176 00:14:06.812 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3790176 00:14:06.812 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:06.812 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 3790176 ']' 00:14:06.812 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.812 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:06.812 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.812 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:06.812 09:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:06.812 [2024-11-27 09:46:21.472618] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
00:14:06.812 [2024-11-27 09:46:21.472689] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.812 [2024-11-27 09:46:21.572882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:06.812 [2024-11-27 09:46:21.625951] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:06.812 [2024-11-27 09:46:21.626005] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:06.812 [2024-11-27 09:46:21.626014] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:06.812 [2024-11-27 09:46:21.626021] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:06.812 [2024-11-27 09:46:21.626028] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:06.812 [2024-11-27 09:46:21.628481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.812 [2024-11-27 09:46:21.628649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.812 [2024-11-27 09:46:21.628813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.812 [2024-11-27 09:46:21.628814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:07.074 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:07.074 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:14:07.074 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:07.075 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:07.075 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:07.075 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:07.075 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:07.075 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:07.075 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:07.075 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:07.075 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:07.336 "nvmf_tgt_1" 00:14:07.336 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:07.336 "nvmf_tgt_2" 00:14:07.336 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
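The multitarget assertions above and just below boil down to counting targets with jq after each mutation: 1 default target at start, 3 after creating nvmf_tgt_1 and nvmf_tgt_2 (-s 32 as in the trace), and back to 1 once both are deleted. In sketch form, with multitarget_rpc.py abbreviated from the full workspace path:

    count_targets() { multitarget_rpc.py nvmf_get_targets | jq length; }
    [ "$(count_targets)" != 1 ] && exit 1      # only the default target exists
    multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32
    multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$(count_targets)" != 3 ] && exit 1      # default + the two new ones
    multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1
    multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2
    [ "$(count_targets)" != 1 ] && exit 1      # back to just the default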
00:14:07.336 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:14:07.623 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:07.623 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:07.623 true 00:14:07.623 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:07.623 true 00:14:07.623 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:07.623 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:07.943 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:07.943 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:07.943 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:14:07.943 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:07.943 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:14:07.943 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:07.943 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:14:07.943 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:07.943 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:07.943 rmmod nvme_tcp 00:14:07.943 rmmod nvme_fabrics 00:14:07.943 rmmod nvme_keyring 00:14:07.943 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:07.943 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:14:07.943 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:14:07.943 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3790176 ']' 00:14:07.943 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3790176 00:14:07.943 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 3790176 ']' 00:14:07.943 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 3790176 00:14:07.943 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:14:07.943 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:07.943 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3790176 00:14:07.943 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:07.943 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:07.943 09:46:23 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3790176' 00:14:07.943 killing process with pid 3790176 00:14:07.943 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 3790176 00:14:07.943 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 3790176 00:14:08.273 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:08.273 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:08.273 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:08.273 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:14:08.273 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:14:08.273 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:08.273 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:14:08.273 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:08.273 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:08.273 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.273 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:08.273 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.197 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:10.197 00:14:10.197 real 0m11.913s 00:14:10.197 user 0m10.333s 00:14:10.197 sys 0m6.225s 00:14:10.197 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:10.197 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:10.197 ************************************ 00:14:10.197 END TEST nvmf_multitarget 00:14:10.197 ************************************ 00:14:10.197 09:46:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:10.197 09:46:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:10.197 09:46:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:10.197 09:46:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:10.197 ************************************ 00:14:10.197 START TEST nvmf_rpc 00:14:10.197 ************************************ 00:14:10.197 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:10.460 * Looking for test storage... 
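[note] run_test is the autotest harness wrapper that produces the START TEST / END TEST banners and the real/user/sys timing summary seen above. Conceptually it behaves like the following sketch (illustrative only, not the actual autotest_common.sh source):

    run_test() {                  # sketch of the banner-and-timing wrapper
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                 # e.g. target/rpc.sh --transport=tcp
        echo "END TEST $name"
    }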
00:14:10.460 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:10.460 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:10.460 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:14:10.460 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:10.460 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:10.460 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:10.460 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:10.460 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:10.460 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:14:10.460 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:14:10.460 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:10.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.461 --rc genhtml_branch_coverage=1 00:14:10.461 --rc genhtml_function_coverage=1 00:14:10.461 --rc genhtml_legend=1 00:14:10.461 --rc geninfo_all_blocks=1 00:14:10.461 --rc geninfo_unexecuted_blocks=1 00:14:10.461 00:14:10.461 ' 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:10.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.461 --rc genhtml_branch_coverage=1 00:14:10.461 --rc genhtml_function_coverage=1 00:14:10.461 --rc genhtml_legend=1 00:14:10.461 --rc geninfo_all_blocks=1 00:14:10.461 --rc geninfo_unexecuted_blocks=1 00:14:10.461 00:14:10.461 ' 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:10.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.461 --rc genhtml_branch_coverage=1 00:14:10.461 --rc genhtml_function_coverage=1 00:14:10.461 --rc genhtml_legend=1 00:14:10.461 --rc geninfo_all_blocks=1 00:14:10.461 --rc geninfo_unexecuted_blocks=1 00:14:10.461 00:14:10.461 ' 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:10.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.461 --rc genhtml_branch_coverage=1 00:14:10.461 --rc genhtml_function_coverage=1 00:14:10.461 --rc genhtml_legend=1 00:14:10.461 --rc geninfo_all_blocks=1 00:14:10.461 --rc geninfo_unexecuted_blocks=1 00:14:10.461 00:14:10.461 ' 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
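[note] The scripts/common.sh trace above is cmp_versions splitting the installed lcov version and the threshold '2' on dots and comparing them field by field; the result decides which LCOV_OPTS/LCOV exports follow. The same idea in a few lines (a sketch, not the library code):

    IFS=.- read -ra ver1 <<< "1.15"
    IFS=.- read -ra ver2 <<< "2"
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { echo "1.15 < 2"; break; }
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { echo "1.15 > 2"; break; }
    done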
00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:10.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:10.461 09:46:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:10.461 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:10.462 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.462 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:10.462 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.462 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:10.462 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:10.462 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:14:10.462 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.598 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:18.598 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:14:18.598 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:18.598 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:18.598 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:18.598 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:18.599 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:18.599 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:18.599 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:18.599 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:18.599 09:46:33 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:18.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:18.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:14:18.599 00:14:18.599 --- 10.0.0.2 ping statistics --- 00:14:18.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.599 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:18.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:18.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:14:18.599 00:14:18.599 --- 10.0.0.1 ping statistics --- 00:14:18.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.599 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:14:18.599 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:18.600 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:14:18.600 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:18.600 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:18.600 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:18.600 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:18.600 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:18.600 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:18.600 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:18.600 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:18.600 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:18.600 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:18.600 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.600 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3794785 00:14:18.600 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3794785 00:14:18.600 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 3794785 ']' 00:14:18.600 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.600 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:18.600 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:18.600 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.600 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:18.600 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.600 [2024-11-27 09:46:33.517618] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
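[note] The two pings above confirm the physical-port topology that nvmf_tcp_init built earlier in this trace: the target-side ice port cvl_0_0 (10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace while the initiator port cvl_0_1 (10.0.0.1) stays in the root namespace, with TCP port 4420 opened for NVMe/TCP. Condensed from the setup commands in the log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                  # root ns -> target port
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator port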
00:14:18.600 [2024-11-27 09:46:33.517688] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.600 [2024-11-27 09:46:33.617311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:18.600 [2024-11-27 09:46:33.671068] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:18.600 [2024-11-27 09:46:33.671122] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:18.600 [2024-11-27 09:46:33.671131] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:18.600 [2024-11-27 09:46:33.671138] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:18.600 [2024-11-27 09:46:33.671145] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:18.600 [2024-11-27 09:46:33.673553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:18.600 [2024-11-27 09:46:33.673713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:18.600 [2024-11-27 09:46:33.673873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:18.600 [2024-11-27 09:46:33.673874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.170 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:19.170 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:19.170 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:19.170 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:19.170 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.170 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:19.170 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:19.170 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.170 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.170 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.170 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:19.170 "tick_rate": 2400000000, 00:14:19.170 "poll_groups": [ 00:14:19.170 { 00:14:19.170 "name": "nvmf_tgt_poll_group_000", 00:14:19.170 "admin_qpairs": 0, 00:14:19.170 "io_qpairs": 0, 00:14:19.170 "current_admin_qpairs": 0, 00:14:19.170 "current_io_qpairs": 0, 00:14:19.170 "pending_bdev_io": 0, 00:14:19.170 "completed_nvme_io": 0, 00:14:19.170 "transports": [] 00:14:19.170 }, 00:14:19.170 { 00:14:19.170 "name": "nvmf_tgt_poll_group_001", 00:14:19.170 "admin_qpairs": 0, 00:14:19.170 "io_qpairs": 0, 00:14:19.170 "current_admin_qpairs": 0, 00:14:19.170 "current_io_qpairs": 0, 00:14:19.170 "pending_bdev_io": 0, 00:14:19.170 "completed_nvme_io": 0, 00:14:19.170 "transports": [] 00:14:19.170 }, 00:14:19.170 { 00:14:19.170 "name": "nvmf_tgt_poll_group_002", 00:14:19.170 "admin_qpairs": 0, 00:14:19.170 "io_qpairs": 0, 00:14:19.170 
"current_admin_qpairs": 0, 00:14:19.170 "current_io_qpairs": 0, 00:14:19.170 "pending_bdev_io": 0, 00:14:19.170 "completed_nvme_io": 0, 00:14:19.170 "transports": [] 00:14:19.170 }, 00:14:19.170 { 00:14:19.170 "name": "nvmf_tgt_poll_group_003", 00:14:19.170 "admin_qpairs": 0, 00:14:19.170 "io_qpairs": 0, 00:14:19.170 "current_admin_qpairs": 0, 00:14:19.170 "current_io_qpairs": 0, 00:14:19.170 "pending_bdev_io": 0, 00:14:19.170 "completed_nvme_io": 0, 00:14:19.170 "transports": [] 00:14:19.170 } 00:14:19.170 ] 00:14:19.170 }' 00:14:19.170 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:19.170 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:19.170 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:19.170 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:19.170 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:19.170 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:19.170 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:19.170 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:19.170 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.170 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.170 [2024-11-27 09:46:34.510803] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:19.170 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.170 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:19.170 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.171 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.171 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.171 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:19.171 "tick_rate": 2400000000, 00:14:19.171 "poll_groups": [ 00:14:19.171 { 00:14:19.171 "name": "nvmf_tgt_poll_group_000", 00:14:19.171 "admin_qpairs": 0, 00:14:19.171 "io_qpairs": 0, 00:14:19.171 "current_admin_qpairs": 0, 00:14:19.171 "current_io_qpairs": 0, 00:14:19.171 "pending_bdev_io": 0, 00:14:19.171 "completed_nvme_io": 0, 00:14:19.171 "transports": [ 00:14:19.171 { 00:14:19.171 "trtype": "TCP" 00:14:19.171 } 00:14:19.171 ] 00:14:19.171 }, 00:14:19.171 { 00:14:19.171 "name": "nvmf_tgt_poll_group_001", 00:14:19.171 "admin_qpairs": 0, 00:14:19.171 "io_qpairs": 0, 00:14:19.171 "current_admin_qpairs": 0, 00:14:19.171 "current_io_qpairs": 0, 00:14:19.171 "pending_bdev_io": 0, 00:14:19.171 "completed_nvme_io": 0, 00:14:19.171 "transports": [ 00:14:19.171 { 00:14:19.171 "trtype": "TCP" 00:14:19.171 } 00:14:19.171 ] 00:14:19.171 }, 00:14:19.171 { 00:14:19.171 "name": "nvmf_tgt_poll_group_002", 00:14:19.171 "admin_qpairs": 0, 00:14:19.171 "io_qpairs": 0, 00:14:19.171 "current_admin_qpairs": 0, 00:14:19.171 "current_io_qpairs": 0, 00:14:19.171 "pending_bdev_io": 0, 00:14:19.171 "completed_nvme_io": 0, 00:14:19.171 "transports": [ 00:14:19.171 { 00:14:19.171 "trtype": "TCP" 
00:14:19.171 } 00:14:19.171 ] 00:14:19.171 }, 00:14:19.171 { 00:14:19.171 "name": "nvmf_tgt_poll_group_003", 00:14:19.171 "admin_qpairs": 0, 00:14:19.171 "io_qpairs": 0, 00:14:19.171 "current_admin_qpairs": 0, 00:14:19.171 "current_io_qpairs": 0, 00:14:19.171 "pending_bdev_io": 0, 00:14:19.171 "completed_nvme_io": 0, 00:14:19.171 "transports": [ 00:14:19.171 { 00:14:19.171 "trtype": "TCP" 00:14:19.171 } 00:14:19.171 ] 00:14:19.171 } 00:14:19.171 ] 00:14:19.171 }' 00:14:19.171 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:19.171 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:19.171 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:19.171 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:19.171 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:19.171 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:19.171 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:19.171 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:19.171 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.433 Malloc1 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.433 [2024-11-27 09:46:34.729133] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:14:19.433 [2024-11-27 09:46:34.766028] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:14:19.433 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:19.433 could not add new controller: failed to write to nvme-fabrics device 00:14:19.433 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:14:19.433 09:46:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:19.434 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:19.434 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:19.434 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:19.434 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.434 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.434 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.434 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:21.354 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:21.354 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:21.354 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:21.354 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:21.354 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:23.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:23.264 [2024-11-27 09:46:38.522566] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:14:23.264 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:23.264 could not add new controller: failed to write to nvme-fabrics device 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.264 
09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.264 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:24.648 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:24.648 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:24.648 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:24.648 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:24.648 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:27.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:27.194 
09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.194 [2024-11-27 09:46:42.240226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.194 09:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:28.575 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:28.575 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:28.575 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:28.575 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:28.575 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:30.488 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:30.488 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:30.488 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:30.488 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:30.488 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:30.488 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:30.488 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:30.488 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.488 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:30.488 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:30.488 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:30.488 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:30.488 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:30.488 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:30.488 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:30.488 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:30.488 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.488 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.749 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.749 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:30.749 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.749 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.749 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.749 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:30.749 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:30.749 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.749 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.749 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.749 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:30.749 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.749 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.749 [2024-11-27 09:46:45.991968] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:30.749 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.749 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:30.749 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.749 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.749 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.749 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:30.749 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.749 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.749 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.749 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:32.134 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:32.134 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:32.134 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:32.134 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:32.134 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:34.047 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:34.047 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:34.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.308 [2024-11-27 09:46:49.709131] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.308 09:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:36.220 09:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:36.220 09:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:36.220 09:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:36.220 09:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:36.220 09:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:38.131 
09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:38.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.131 [2024-11-27 09:46:53.438072] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.131 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:40.043 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:40.043 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:40.043 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:40.043 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:40.043 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:41.954 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:41.954 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:41.954 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:41.954 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:41.954 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:41.954 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:41.954 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:41.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.954 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:41.954 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:41.954 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:41.954 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:14:41.954 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:41.954 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:41.954 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:41.954 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:41.954 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.954 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.954 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.954 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:41.954 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.954 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.954 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.954 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:41.954 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:41.954 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.954 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.954 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.954 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:41.954 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.954 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.954 [2024-11-27 09:46:57.190227] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:41.954 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.954 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:41.955 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.955 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.955 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.955 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:41.955 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.955 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.955 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.955 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:43.339 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:43.339 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:43.339 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:43.339 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:43.339 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:45.251 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:45.251 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:45.251 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:45.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:45.512 
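
That completes the fifth pass of the rpc.sh@81 loop; every iteration above is the same create/attach/connect/verify/tear-down cycle. Condensed into one place (same NQN, serial, listener and Malloc1 bdev as in the log, with rpc.py standing in for rpc_cmd and the helper sketches from earlier; HOSTNQN as defined in the first sketch):

    for i in $(seq 1 5); do
        rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # fixed nsid 5
        rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$HOSTNQN"
        waitforserial SPDKISFASTANDAWESOME              # block device appeared?
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME   # ...and went away again?
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done
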
09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.512 [2024-11-27 09:47:00.911009] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.512 [2024-11-27 09:47:00.971136] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.512 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.774 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.774 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:45.774 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.774 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.774 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.774 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:45.774 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.774 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.774 
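
The rpc.sh@99 loop running through here repeats the subsystem/namespace lifecycle five more times purely over RPC, with no host connected, and lets the target assign the namespace ID instead of forcing -n 5 (which is why the matching remove_ns uses nsid 1). One pass, sketched under the same assumptions as the previous blocks:

    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # nsid auto-assigned: 1
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
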
09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.774 [2024-11-27 09:47:01.039344] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.774 [2024-11-27 09:47:01.111568] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:45.774 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.775 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.775 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.775 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:45.775 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.775 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.775 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.775 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:45.775 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:45.775 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.775 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.775 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.775 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:45.775 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.775 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.775 [2024-11-27 09:47:01.183806] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:45.775 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.775 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:45.775 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.775 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.775 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.775 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:45.775 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.775 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.775 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.775 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:45.775 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.775 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.775 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.775 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:45.775 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.775 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.775 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.775 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:45.775 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.775 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:46.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:46.037 "tick_rate": 2400000000, 00:14:46.037 "poll_groups": [ 00:14:46.037 { 00:14:46.037 "name": "nvmf_tgt_poll_group_000", 00:14:46.037 "admin_qpairs": 0, 00:14:46.037 "io_qpairs": 224, 00:14:46.037 "current_admin_qpairs": 0, 00:14:46.037 "current_io_qpairs": 0, 00:14:46.037 "pending_bdev_io": 0, 00:14:46.037 "completed_nvme_io": 518, 00:14:46.037 "transports": [ 00:14:46.037 { 00:14:46.037 "trtype": "TCP" 00:14:46.037 } 00:14:46.037 ] 00:14:46.037 }, 00:14:46.037 { 00:14:46.037 "name": "nvmf_tgt_poll_group_001", 00:14:46.037 "admin_qpairs": 1, 00:14:46.037 "io_qpairs": 223, 00:14:46.037 "current_admin_qpairs": 0, 00:14:46.037 "current_io_qpairs": 0, 00:14:46.037 "pending_bdev_io": 0, 00:14:46.037 "completed_nvme_io": 224, 00:14:46.037 "transports": [ 00:14:46.037 { 00:14:46.037 "trtype": "TCP" 00:14:46.037 } 00:14:46.037 ] 00:14:46.037 }, 00:14:46.037 { 00:14:46.037 "name": "nvmf_tgt_poll_group_002", 00:14:46.037 "admin_qpairs": 6, 00:14:46.037 "io_qpairs": 218, 00:14:46.037 "current_admin_qpairs": 0, 00:14:46.037 "current_io_qpairs": 0, 00:14:46.037 "pending_bdev_io": 0, 00:14:46.037 "completed_nvme_io": 223, 00:14:46.037 "transports": [ 00:14:46.037 { 00:14:46.037 "trtype": "TCP" 00:14:46.037 } 00:14:46.037 ] 00:14:46.037 }, 00:14:46.037 { 00:14:46.037 "name": "nvmf_tgt_poll_group_003", 00:14:46.037 "admin_qpairs": 0, 00:14:46.037 "io_qpairs": 224, 00:14:46.037 "current_admin_qpairs": 0, 00:14:46.037 "current_io_qpairs": 0, 00:14:46.037 "pending_bdev_io": 0, 00:14:46.037 "completed_nvme_io": 274, 00:14:46.037 "transports": [ 00:14:46.037 { 00:14:46.037 "trtype": "TCP" 00:14:46.037 } 00:14:46.037 ] 00:14:46.037 } 00:14:46.037 ] 00:14:46.037 }' 00:14:46.037 09:47:01 
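
The nvmf_get_stats dump above feeds the test's final assertions: jsum, traced next, pipes one JSON field through jq and sums it with awk across the four poll groups, so admin_qpairs totals 0+1+6+0 = 7 and io_qpairs 224+223+218+224 = 889, both of which must be > 0. A sketch of the helper (assuming rpc.py as before; the real jsum re-reads the saved $stats string rather than querying the target again):

    jsum() {
        local filter=$1
        rpc.py nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
    }
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 7 in this run
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 889 in this run
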
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:46.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:46.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:46.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:46.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:46.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:46.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:46.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:46.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:46.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:14:46.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:46.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:46.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:14:46.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:46.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:14:46.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:46.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:14:46.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:46.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:46.037 rmmod nvme_tcp 00:14:46.037 rmmod nvme_fabrics 00:14:46.037 rmmod nvme_keyring 00:14:46.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:46.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:14:46.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:14:46.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3794785 ']' 00:14:46.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3794785 00:14:46.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 3794785 ']' 00:14:46.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 3794785 00:14:46.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:14:46.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:46.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3794785 00:14:46.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:46.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:46.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3794785' 00:14:46.037 killing process with pid 3794785 00:14:46.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 3794785 00:14:46.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 3794785 00:14:46.298 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:46.298 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:46.298 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:46.298 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:14:46.298 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:46.298 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:14:46.298 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:14:46.298 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:46.298 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:46.298 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.298 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:46.298 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.214 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:48.475 00:14:48.475 real 0m38.061s 00:14:48.475 user 1m53.723s 00:14:48.475 sys 0m7.974s 00:14:48.475 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:48.475 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.475 ************************************ 00:14:48.475 END TEST nvmf_rpc 00:14:48.475 ************************************ 00:14:48.475 09:47:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:48.475 09:47:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:48.475 09:47:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:48.475 09:47:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:48.475 ************************************ 00:14:48.475 START TEST nvmf_invalid 00:14:48.475 ************************************ 00:14:48.475 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:48.475 * Looking for test storage... 
00:14:48.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:48.475 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:48.475 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:48.475 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:48.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.738 --rc genhtml_branch_coverage=1 00:14:48.738 --rc genhtml_function_coverage=1 00:14:48.738 --rc genhtml_legend=1 00:14:48.738 --rc geninfo_all_blocks=1 00:14:48.738 --rc geninfo_unexecuted_blocks=1 00:14:48.738 00:14:48.738 ' 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:48.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.738 --rc genhtml_branch_coverage=1 00:14:48.738 --rc genhtml_function_coverage=1 00:14:48.738 --rc genhtml_legend=1 00:14:48.738 --rc geninfo_all_blocks=1 00:14:48.738 --rc geninfo_unexecuted_blocks=1 00:14:48.738 00:14:48.738 ' 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:48.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.738 --rc genhtml_branch_coverage=1 00:14:48.738 --rc genhtml_function_coverage=1 00:14:48.738 --rc genhtml_legend=1 00:14:48.738 --rc geninfo_all_blocks=1 00:14:48.738 --rc geninfo_unexecuted_blocks=1 00:14:48.738 00:14:48.738 ' 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:48.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.738 --rc genhtml_branch_coverage=1 00:14:48.738 --rc genhtml_function_coverage=1 00:14:48.738 --rc genhtml_legend=1 00:14:48.738 --rc geninfo_all_blocks=1 00:14:48.738 --rc geninfo_unexecuted_blocks=1 00:14:48.738 00:14:48.738 ' 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:48.738 09:47:03 
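
The lt 1.15 2 trace above is scripts/common.sh deciding that the installed lcov (1.15) predates 2.x, so the legacy --rc lcov_branch_coverage / lcov_function_coverage options get baked into LCOV_OPTS and LCOV. cmp_versions splits each version string on dots, dashes and colons and compares field by field, numerically. Roughly (a simplified reconstruction of the traced logic, padding missing fields with 0 and assuming numeric fields):

    lt() {   # true (0) iff version $1 < version $2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    lt 1.15 2 && echo "lcov < 2: keep the legacy --rc flags"
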
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:48.738 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:48.739 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:48.739 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:48.739 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:48.739 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:48.739 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:48.739 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:48.739 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:14:48.739 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:48.739 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:48.739 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:48.739 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.739 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.739 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.739 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:48.739 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.739 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:14:48.739 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:48.739 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:48.739 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:48.739 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:48.739 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:48.739 09:47:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:48.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:48.739 09:47:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:48.739 09:47:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:48.739 09:47:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:48.739 09:47:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:48.739 09:47:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:48.739 09:47:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:48.739 09:47:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:48.739 09:47:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:48.739 09:47:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:48.739 09:47:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:48.739 09:47:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:48.739 09:47:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:48.739 09:47:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:48.739 09:47:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:48.739 09:47:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.739 09:47:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:48.739 09:47:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.739 09:47:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:48.739 09:47:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:48.739 09:47:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:14:48.739 09:47:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:56.882 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:56.882 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:56.882 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:56.883 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:56.883 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:56.883 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:56.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:14:56.883 00:14:56.883 --- 10.0.0.2 ping statistics --- 00:14:56.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.883 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:56.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:56.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:14:56.883 00:14:56.883 --- 10.0.0.1 ping statistics --- 00:14:56.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.883 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3804454 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3804454 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 3804454 ']' 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:56.883 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:56.883 [2024-11-27 09:47:11.563066] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
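The nvmftestinit sequence traced above finds the e810 ports under their PCI addresses in sysfs, moves the target-side interface into a private network namespace, opens the NVMe/TCP port in iptables, and ping-checks both directions before the target app starts. A minimal sketch of the equivalent plumbing, assuming the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing that appear in this log (the real nvmf/common.sh also handles multiple NIC families and RDMA):

# Sketch only: condensed from the nvmf/common.sh behavior traced above.
pci=0000:4b:00.0
# Each PCI NIC exposes its net interfaces under sysfs:
netdev=$(ls "/sys/bus/pci/devices/$pci/net" | head -n1)   # e.g. cvl_0_0

ip netns add cvl_0_0_ns_spdk                  # namespace for the target side
ip link set "$netdev" netns cvl_0_0_ns_spdk   # move the target port out of the host
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator stays in the host netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev "$netdev"
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set "$netdev" up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Once both pings succeed, nvmf_tgt is launched inside the namespace (the ip netns exec prefix visible in the nvmfappstart line below), so the target only sees the moved interface.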
00:14:56.883 [2024-11-27 09:47:11.563130] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.883 [2024-11-27 09:47:11.662035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:56.883 [2024-11-27 09:47:11.714992] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.883 [2024-11-27 09:47:11.715044] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.883 [2024-11-27 09:47:11.715053] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.883 [2024-11-27 09:47:11.715060] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:56.883 [2024-11-27 09:47:11.715067] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:56.883 [2024-11-27 09:47:11.717444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.883 [2024-11-27 09:47:11.717604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:56.883 [2024-11-27 09:47:11.717764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.883 [2024-11-27 09:47:11.717765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:57.143 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:57.144 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:14:57.144 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:57.144 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:57.144 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:57.144 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.144 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:57.144 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode27256 00:14:57.144 [2024-11-27 09:47:12.598645] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:57.405 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:57.405 { 00:14:57.405 "nqn": "nqn.2016-06.io.spdk:cnode27256", 00:14:57.405 "tgt_name": "foobar", 00:14:57.405 "method": "nvmf_create_subsystem", 00:14:57.405 "req_id": 1 00:14:57.405 } 00:14:57.405 Got JSON-RPC error response 00:14:57.405 response: 00:14:57.405 { 00:14:57.405 "code": -32603, 00:14:57.405 "message": "Unable to find target foobar" 00:14:57.405 }' 00:14:57.405 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:57.405 { 00:14:57.405 "nqn": "nqn.2016-06.io.spdk:cnode27256", 00:14:57.405 "tgt_name": "foobar", 00:14:57.405 "method": "nvmf_create_subsystem", 00:14:57.405 "req_id": 1 00:14:57.405 } 00:14:57.405 Got JSON-RPC error response 00:14:57.405 
response: 00:14:57.405 { 00:14:57.405 "code": -32603, 00:14:57.405 "message": "Unable to find target foobar" 00:14:57.405 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:57.405 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:57.405 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode27324 00:14:57.405 [2024-11-27 09:47:12.807510] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27324: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:57.405 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:57.405 { 00:14:57.405 "nqn": "nqn.2016-06.io.spdk:cnode27324", 00:14:57.405 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:57.405 "method": "nvmf_create_subsystem", 00:14:57.405 "req_id": 1 00:14:57.405 } 00:14:57.405 Got JSON-RPC error response 00:14:57.405 response: 00:14:57.405 { 00:14:57.405 "code": -32602, 00:14:57.405 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:57.405 }' 00:14:57.405 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:57.405 { 00:14:57.405 "nqn": "nqn.2016-06.io.spdk:cnode27324", 00:14:57.405 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:57.405 "method": "nvmf_create_subsystem", 00:14:57.405 "req_id": 1 00:14:57.405 } 00:14:57.405 Got JSON-RPC error response 00:14:57.405 response: 00:14:57.405 { 00:14:57.405 "code": -32602, 00:14:57.405 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:57.405 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:57.405 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:57.405 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode15795 00:14:57.666 [2024-11-27 09:47:13.016281] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15795: invalid model number 'SPDK_Controller' 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:57.666 { 00:14:57.666 "nqn": "nqn.2016-06.io.spdk:cnode15795", 00:14:57.666 "model_number": "SPDK_Controller\u001f", 00:14:57.666 "method": "nvmf_create_subsystem", 00:14:57.666 "req_id": 1 00:14:57.666 } 00:14:57.666 Got JSON-RPC error response 00:14:57.666 response: 00:14:57.666 { 00:14:57.666 "code": -32602, 00:14:57.666 "message": "Invalid MN SPDK_Controller\u001f" 00:14:57.666 }' 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:57.666 { 00:14:57.666 "nqn": "nqn.2016-06.io.spdk:cnode15795", 00:14:57.666 "model_number": "SPDK_Controller\u001f", 00:14:57.666 "method": "nvmf_create_subsystem", 00:14:57.666 "req_id": 1 00:14:57.666 } 00:14:57.666 Got JSON-RPC error response 00:14:57.666 response: 00:14:57.666 { 00:14:57.666 "code": -32602, 00:14:57.666 "message": "Invalid MN SPDK_Controller\u001f" 00:14:57.666 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:57.666 09:47:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
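Each negative test traced above follows the same pattern: drive scripts/rpc.py with one deliberately bad argument (an unknown target name, then a serial or model number containing the control byte \x1f), capture the JSON-RPC error text, and glob-match the expected message. A hedged sketch of that pattern, with the workspace path shortened to a stand-in and the exit-status handling assumed (the subcommand, NQNs, and error strings are taken from the trace):

rpc=./scripts/rpc.py   # stands in for the full jenkins workspace path used in this run

out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode27256 2>&1) || true
# The target rejects the unknown transport target; the test asserts on the message:
[[ $out == *"Unable to find target"* ]]

# Same idea for a serial number carrying a non-printable \x1f byte:
out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode27324 2>&1) || true
[[ $out == *"Invalid SN"* ]]

# And for the model number variant, which must report "Invalid MN":
out=$($rpc nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode15795 2>&1) || true
[[ $out == *"Invalid MN"* ]]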
00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:14:57.666 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x31' 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 43 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ N == \- ]] 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'NS7U?ru#erN1[Di"w^+n&' 00:14:57.927 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'NS7U?ru#erN1[Di"w^+n&' nqn.2016-06.io.spdk:cnode21707 00:14:58.189 [2024-11-27 09:47:13.401769] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21707: invalid serial number 'NS7U?ru#erN1[Di"w^+n&' 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:58.189 { 00:14:58.189 "nqn": "nqn.2016-06.io.spdk:cnode21707", 00:14:58.189 "serial_number": "NS7U?ru#erN1[Di\"w^+n&", 00:14:58.189 "method": "nvmf_create_subsystem", 00:14:58.189 "req_id": 1 00:14:58.189 } 00:14:58.189 Got JSON-RPC error response 00:14:58.189 response: 00:14:58.189 { 00:14:58.189 "code": -32602, 00:14:58.189 "message": "Invalid SN NS7U?ru#erN1[Di\"w^+n&" 00:14:58.189 }' 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:58.189 { 00:14:58.189 "nqn": "nqn.2016-06.io.spdk:cnode21707", 00:14:58.189 "serial_number": "NS7U?ru#erN1[Di\"w^+n&", 00:14:58.189 "method": "nvmf_create_subsystem", 00:14:58.189 "req_id": 1 00:14:58.189 } 00:14:58.189 Got JSON-RPC error response 00:14:58.189 response: 00:14:58.189 { 00:14:58.189 "code": -32602, 00:14:58.189 "message": "Invalid SN NS7U?ru#erN1[Di\"w^+n&" 00:14:58.189 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' 
'49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 74 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 
00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.189 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:14:58.190 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:58.190 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:14:58.190 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.190 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.190 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:14:58.190 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:14:58.190 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:14:58.190 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.190 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.190 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:14:58.190 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:58.190 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:14:58.190 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.190 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.190 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 
00:14:58.190 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:58.190 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:14:58.190 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.190 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.190 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:14:58.190 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:14:58.190 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:14:58.190 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.190 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.190 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:14:58.190 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:58.190 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:14:58.190 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.190 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.190 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:14:58.190 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
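The long run of printf %x / echo -e pairs above is target/invalid.sh assembling a random model number one character at a time: printf %x converts a decimal code point to hex, echo -e '\xNN' converts it back to the literal character, and string+= accumulates the result. A minimal standalone sketch of the same technique, assuming plain bash and a printable-ASCII range (illustrative, not the script's exact code):

    # Build a random 41-character string the way the loop in the trace does.
    # 41 characters is one more than the 40-byte NVMe model-number field,
    # which is what makes the value invalid.
    string=''
    length=41
    for (( ll = 0; ll < length; ll++ )); do
      code=$(( (RANDOM % 95) + 32 ))   # printable ASCII: 0x20 (space) .. 0x7e (~)
      hex=$(printf %x "$code")         # e.g. 36 -> "24"
      string+=$(echo -e "\x$hex")      # e.g. "\x24" -> '$'
    done
    echo "$string"

The assembled value is then handed to nvmf_create_subsystem as a deliberately invalid model number, as the RPC calls below show.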
00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x75' 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:14:58.452 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:58.453 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:14:58.453 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.453 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.453 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 68 00:14:58.453 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:58.453 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:14:58.453 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:58.453 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:58.453 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 8 == \- ]] 00:14:58.453 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '8+km J5+?OPO@#R7$1t6o`'\''Y6%?Oy_I_$u2$w#QMD' 00:14:58.453 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '8+km J5+?OPO@#R7$1t6o`'\''Y6%?Oy_I_$u2$w#QMD' nqn.2016-06.io.spdk:cnode19953 00:14:58.713 [2024-11-27 09:47:13.947815] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19953: invalid model number '8+km J5+?OPO@#R7$1t6o`'Y6%?Oy_I_$u2$w#QMD' 00:14:58.713 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:14:58.713 { 00:14:58.713 "nqn": "nqn.2016-06.io.spdk:cnode19953", 00:14:58.714 "model_number": "8+km J5+?OPO@#R7$1t6o`'\''Y6%?Oy_I_$u2$w#QMD", 00:14:58.714 "method": "nvmf_create_subsystem", 00:14:58.714 "req_id": 1 00:14:58.714 } 00:14:58.714 Got JSON-RPC error response 00:14:58.714 response: 00:14:58.714 { 00:14:58.714 "code": -32602, 00:14:58.714 "message": "Invalid MN 8+km J5+?OPO@#R7$1t6o`'\''Y6%?Oy_I_$u2$w#QMD" 00:14:58.714 }' 00:14:58.714 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:14:58.714 { 00:14:58.714 "nqn": "nqn.2016-06.io.spdk:cnode19953", 00:14:58.714 "model_number": "8+km J5+?OPO@#R7$1t6o`'Y6%?Oy_I_$u2$w#QMD", 00:14:58.714 "method": "nvmf_create_subsystem", 00:14:58.714 "req_id": 1 00:14:58.714 } 00:14:58.714 Got JSON-RPC error response 00:14:58.714 response: 00:14:58.714 { 00:14:58.714 "code": -32602, 00:14:58.714 "message": "Invalid MN 8+km J5+?OPO@#R7$1t6o`'Y6%?Oy_I_$u2$w#QMD" 00:14:58.714 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:58.714 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:58.714 [2024-11-27 09:47:14.148702] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:58.974 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:58.974 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:14:58.974 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:14:58.974 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:14:58.974 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:14:58.974 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:14:59.235 [2024-11-27 09:47:14.566228] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:59.235 09:47:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:14:59.235 { 00:14:59.235 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:59.235 "listen_address": { 00:14:59.235 "trtype": "tcp", 00:14:59.235 "traddr": "", 00:14:59.235 "trsvcid": "4421" 00:14:59.235 }, 00:14:59.235 "method": "nvmf_subsystem_remove_listener", 00:14:59.235 "req_id": 1 00:14:59.235 } 00:14:59.235 Got JSON-RPC error response 00:14:59.235 response: 00:14:59.235 { 00:14:59.235 "code": -32602, 00:14:59.235 "message": "Invalid parameters" 00:14:59.235 }' 00:14:59.235 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:14:59.235 { 00:14:59.235 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:59.235 "listen_address": { 00:14:59.235 "trtype": "tcp", 00:14:59.235 "traddr": "", 00:14:59.235 "trsvcid": "4421" 00:14:59.235 }, 00:14:59.235 "method": "nvmf_subsystem_remove_listener", 00:14:59.235 "req_id": 1 00:14:59.235 } 00:14:59.235 Got JSON-RPC error response 00:14:59.235 response: 00:14:59.235 { 00:14:59.235 "code": -32602, 00:14:59.235 "message": "Invalid parameters" 00:14:59.235 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:59.235 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25871 -i 0 00:14:59.495 [2024-11-27 09:47:14.754809] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25871: invalid cntlid range [0-65519] 00:14:59.495 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:14:59.495 { 00:14:59.495 "nqn": "nqn.2016-06.io.spdk:cnode25871", 00:14:59.495 "min_cntlid": 0, 00:14:59.495 "method": "nvmf_create_subsystem", 00:14:59.495 "req_id": 1 00:14:59.495 } 00:14:59.495 Got JSON-RPC error response 00:14:59.495 response: 00:14:59.495 { 00:14:59.495 "code": -32602, 00:14:59.495 "message": "Invalid cntlid range [0-65519]" 00:14:59.495 }' 00:14:59.495 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:14:59.495 { 00:14:59.495 "nqn": "nqn.2016-06.io.spdk:cnode25871", 00:14:59.495 "min_cntlid": 0, 00:14:59.495 "method": "nvmf_create_subsystem", 00:14:59.495 "req_id": 1 00:14:59.495 } 00:14:59.495 Got JSON-RPC error response 00:14:59.495 response: 00:14:59.495 { 00:14:59.495 "code": -32602, 00:14:59.495 "message": "Invalid cntlid range [0-65519]" 00:14:59.495 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:59.495 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23021 -i 65520 00:14:59.495 [2024-11-27 09:47:14.935335] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23021: invalid cntlid range [65520-65519] 00:14:59.755 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:14:59.755 { 00:14:59.755 "nqn": "nqn.2016-06.io.spdk:cnode23021", 00:14:59.755 "min_cntlid": 65520, 00:14:59.755 "method": "nvmf_create_subsystem", 00:14:59.755 "req_id": 1 00:14:59.755 } 00:14:59.755 Got JSON-RPC error response 00:14:59.755 response: 00:14:59.755 { 00:14:59.755 "code": -32602, 00:14:59.755 "message": "Invalid cntlid range [65520-65519]" 00:14:59.755 }' 00:14:59.755 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:14:59.755 { 00:14:59.755 
"nqn": "nqn.2016-06.io.spdk:cnode23021", 00:14:59.755 "min_cntlid": 65520, 00:14:59.755 "method": "nvmf_create_subsystem", 00:14:59.755 "req_id": 1 00:14:59.755 } 00:14:59.755 Got JSON-RPC error response 00:14:59.755 response: 00:14:59.755 { 00:14:59.755 "code": -32602, 00:14:59.755 "message": "Invalid cntlid range [65520-65519]" 00:14:59.755 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:59.755 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20775 -I 0 00:14:59.755 [2024-11-27 09:47:15.115895] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20775: invalid cntlid range [1-0] 00:14:59.755 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:14:59.755 { 00:14:59.755 "nqn": "nqn.2016-06.io.spdk:cnode20775", 00:14:59.755 "max_cntlid": 0, 00:14:59.755 "method": "nvmf_create_subsystem", 00:14:59.755 "req_id": 1 00:14:59.755 } 00:14:59.755 Got JSON-RPC error response 00:14:59.755 response: 00:14:59.755 { 00:14:59.755 "code": -32602, 00:14:59.755 "message": "Invalid cntlid range [1-0]" 00:14:59.755 }' 00:14:59.755 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:14:59.755 { 00:14:59.755 "nqn": "nqn.2016-06.io.spdk:cnode20775", 00:14:59.755 "max_cntlid": 0, 00:14:59.755 "method": "nvmf_create_subsystem", 00:14:59.755 "req_id": 1 00:14:59.755 } 00:14:59.755 Got JSON-RPC error response 00:14:59.755 response: 00:14:59.755 { 00:14:59.755 "code": -32602, 00:14:59.755 "message": "Invalid cntlid range [1-0]" 00:14:59.755 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:59.755 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9641 -I 65520 00:15:00.016 [2024-11-27 09:47:15.304467] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9641: invalid cntlid range [1-65520] 00:15:00.016 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:15:00.016 { 00:15:00.016 "nqn": "nqn.2016-06.io.spdk:cnode9641", 00:15:00.016 "max_cntlid": 65520, 00:15:00.016 "method": "nvmf_create_subsystem", 00:15:00.016 "req_id": 1 00:15:00.016 } 00:15:00.016 Got JSON-RPC error response 00:15:00.016 response: 00:15:00.016 { 00:15:00.016 "code": -32602, 00:15:00.016 "message": "Invalid cntlid range [1-65520]" 00:15:00.016 }' 00:15:00.016 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:15:00.016 { 00:15:00.016 "nqn": "nqn.2016-06.io.spdk:cnode9641", 00:15:00.016 "max_cntlid": 65520, 00:15:00.016 "method": "nvmf_create_subsystem", 00:15:00.016 "req_id": 1 00:15:00.016 } 00:15:00.016 Got JSON-RPC error response 00:15:00.016 response: 00:15:00.016 { 00:15:00.016 "code": -32602, 00:15:00.016 "message": "Invalid cntlid range [1-65520]" 00:15:00.016 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:00.016 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15968 -i 6 -I 5 00:15:00.276 [2024-11-27 09:47:15.493062] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15968: invalid cntlid range [6-5] 00:15:00.276 09:47:15 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:15:00.276 { 00:15:00.276 "nqn": "nqn.2016-06.io.spdk:cnode15968", 00:15:00.276 "min_cntlid": 6, 00:15:00.276 "max_cntlid": 5, 00:15:00.276 "method": "nvmf_create_subsystem", 00:15:00.276 "req_id": 1 00:15:00.276 } 00:15:00.276 Got JSON-RPC error response 00:15:00.276 response: 00:15:00.276 { 00:15:00.276 "code": -32602, 00:15:00.276 "message": "Invalid cntlid range [6-5]" 00:15:00.276 }' 00:15:00.276 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:15:00.276 { 00:15:00.276 "nqn": "nqn.2016-06.io.spdk:cnode15968", 00:15:00.276 "min_cntlid": 6, 00:15:00.276 "max_cntlid": 5, 00:15:00.276 "method": "nvmf_create_subsystem", 00:15:00.276 "req_id": 1 00:15:00.276 } 00:15:00.276 Got JSON-RPC error response 00:15:00.276 response: 00:15:00.276 { 00:15:00.276 "code": -32602, 00:15:00.276 "message": "Invalid cntlid range [6-5]" 00:15:00.276 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:00.276 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:15:00.276 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:15:00.276 { 00:15:00.276 "name": "foobar", 00:15:00.276 "method": "nvmf_delete_target", 00:15:00.276 "req_id": 1 00:15:00.276 } 00:15:00.276 Got JSON-RPC error response 00:15:00.276 response: 00:15:00.276 { 00:15:00.276 "code": -32602, 00:15:00.276 "message": "The specified target doesn'\''t exist, cannot delete it." 00:15:00.276 }' 00:15:00.276 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:15:00.276 { 00:15:00.276 "name": "foobar", 00:15:00.276 "method": "nvmf_delete_target", 00:15:00.276 "req_id": 1 00:15:00.276 } 00:15:00.276 Got JSON-RPC error response 00:15:00.276 response: 00:15:00.276 { 00:15:00.276 "code": -32602, 00:15:00.276 "message": "The specified target doesn't exist, cannot delete it." 
00:15:00.276 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:00.276 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:00.276 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:15:00.276 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:00.276 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:15:00.276 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:00.276 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:15:00.276 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:00.276 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:00.276 rmmod nvme_tcp 00:15:00.276 rmmod nvme_fabrics 00:15:00.276 rmmod nvme_keyring 00:15:00.276 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:00.276 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:15:00.276 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:15:00.276 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3804454 ']' 00:15:00.276 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 3804454 00:15:00.276 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 3804454 ']' 00:15:00.276 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 3804454 00:15:00.276 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:15:00.276 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:00.276 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3804454 00:15:00.537 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:00.537 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:00.537 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3804454' 00:15:00.537 killing process with pid 3804454 00:15:00.537 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 3804454 00:15:00.537 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 3804454 00:15:00.537 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:00.537 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:00.537 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:00.538 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:15:00.538 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:15:00.538 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:15:00.538 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:15:00.538 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:00.538 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:00.538 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.538 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:00.538 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.086 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:03.086 00:15:03.086 real 0m14.186s 00:15:03.086 user 0m21.227s 00:15:03.086 sys 0m6.788s 00:15:03.086 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:03.086 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:03.086 ************************************ 00:15:03.086 END TEST nvmf_invalid 00:15:03.086 ************************************ 00:15:03.086 09:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:03.086 09:47:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:03.086 09:47:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:03.086 09:47:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:03.086 ************************************ 00:15:03.086 START TEST nvmf_connect_stress 00:15:03.086 ************************************ 00:15:03.086 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:03.086 * Looking for test storage... 
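Every negative case in the nvmf_invalid run that just finished follows one pattern: issue an rpc.py call with an out-of-range value, capture the JSON-RPC error (always code -32602 here), and glob-match the message. Taken together, the responses pin down the valid controller-ID window: min_cntlid >= 1, max_cntlid <= 65519, and min <= max. A hedged sketch of one such check, with the rpc.py path taken from the trace and simplified error handling:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # min_cntlid=0 sits below the valid window, so the target must refuse
    # the subsystem with "Invalid cntlid range [0-65519]".
    out=$("$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25871 -i 0 2>&1) || true
    [[ $out == *"Invalid cntlid range"* ]] || { echo "expected rejection missing" >&2; exit 1; }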
00:15:03.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:03.086 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:03.086 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:15:03.086 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:03.086 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:03.086 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:03.086 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:03.086 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:03.086 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:15:03.086 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:03.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.087 --rc genhtml_branch_coverage=1 00:15:03.087 --rc genhtml_function_coverage=1 00:15:03.087 --rc genhtml_legend=1 00:15:03.087 --rc geninfo_all_blocks=1 00:15:03.087 --rc geninfo_unexecuted_blocks=1 00:15:03.087 00:15:03.087 ' 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:03.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.087 --rc genhtml_branch_coverage=1 00:15:03.087 --rc genhtml_function_coverage=1 00:15:03.087 --rc genhtml_legend=1 00:15:03.087 --rc geninfo_all_blocks=1 00:15:03.087 --rc geninfo_unexecuted_blocks=1 00:15:03.087 00:15:03.087 ' 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:03.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.087 --rc genhtml_branch_coverage=1 00:15:03.087 --rc genhtml_function_coverage=1 00:15:03.087 --rc genhtml_legend=1 00:15:03.087 --rc geninfo_all_blocks=1 00:15:03.087 --rc geninfo_unexecuted_blocks=1 00:15:03.087 00:15:03.087 ' 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:03.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.087 --rc genhtml_branch_coverage=1 00:15:03.087 --rc genhtml_function_coverage=1 00:15:03.087 --rc genhtml_legend=1 00:15:03.087 --rc geninfo_all_blocks=1 00:15:03.087 --rc geninfo_unexecuted_blocks=1 00:15:03.087 00:15:03.087 ' 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:03.087 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:03.087 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:03.088 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:03.088 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:03.088 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:03.088 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.088 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:03.088 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:03.088 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:15:03.088 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:11.398 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:11.398 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:15:11.398 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:11.398 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:11.398 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:11.398 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:11.398 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:11.398 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:15:11.398 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:11.398 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:15:11.398 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:15:11.398 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:15:11.398 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:15:11.398 09:47:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:15:11.398 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:15:11.398 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:11.398 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:11.398 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:11.398 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:11.398 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:11.398 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:11.398 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:11.399 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:11.399 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:11.399 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:11.399 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:11.399 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:11.399 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:11.399 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:15:11.399 00:15:11.399 --- 10.0.0.2 ping statistics --- 00:15:11.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.400 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:15:11.400 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:11.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:11.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:15:11.400 00:15:11.400 --- 10.0.0.1 ping statistics --- 00:15:11.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.400 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:15:11.400 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:11.400 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:15:11.400 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:11.400 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:11.400 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:11.400 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:11.400 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:11.400 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:11.400 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:11.400 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:11.400 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:11.400 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:11.400 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:11.400 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3809697 00:15:11.400 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3809697 00:15:11.400 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:11.400 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 3809697 ']' 00:15:11.400 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.400 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:11.400 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:15:11.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.400 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:11.400 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:11.400 [2024-11-27 09:47:25.913376] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:15:11.400 [2024-11-27 09:47:25.913448] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.400 [2024-11-27 09:47:26.011962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:11.400 [2024-11-27 09:47:26.063644] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:11.400 [2024-11-27 09:47:26.063697] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:11.400 [2024-11-27 09:47:26.063706] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:11.400 [2024-11-27 09:47:26.063712] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:11.400 [2024-11-27 09:47:26.063719] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:11.400 [2024-11-27 09:47:26.065539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:11.400 [2024-11-27 09:47:26.065689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.400 [2024-11-27 09:47:26.065690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:11.400 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:11.400 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:15:11.400 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:11.400 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:11.400 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:11.400 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:11.400 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:11.400 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.400 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:11.400 [2024-11-27 09:47:26.785308] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:11.400 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.400 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:11.400 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
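The bring-up captured above is: start nvmf_tgt inside the cvl_0_0_ns_spdk namespace, wait for it to listen on /var/tmp/spdk.sock, then drive it with rpc_cmd (a test-harness wrapper around SPDK's scripts/rpc.py; the UNIX socket lives in the filesystem rather than the network namespace, so the RPC calls do not need to enter the namespace). A condensed sketch of the same sequence with rpc.py invoked directly — the flags are copied from the trace, the rpc.py path is illustrative, and the final add_ns step is an assumption, since the excerpt cuts away before connect_stress.sh attaches the bdev:

    # Sketch: replay the RPC sequence from the trace against a running nvmf_tgt.
    # Path to rpc.py is illustrative; all flags below are as logged.
    rpc=./scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8192-byte IO units
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                    # allow any host, serial, <= 10 namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                        # listen on the namespaced target IP
    $rpc bdev_null_create NULL1 1000 512                  # 1000 MiB null bdev, 512-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # assumed follow-up: attach the bdev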
00:15:11.400 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:11.400 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.400 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:11.400 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.400 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:11.400 [2024-11-27 09:47:26.810949] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:11.400 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.400 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:11.400 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.400 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:11.400 NULL1 00:15:11.400 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.400 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3809978 00:15:11.400 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:11.400 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:11.400 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:11.400 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:11.400 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:11.400 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:11.400 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:11.400 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:11.400 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:11.400 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:11.401 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:11.401 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:11.401 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:11.661 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:11.661 09:47:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:11.661 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:11.661 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:11.661 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:11.661 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:11.661 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:11.661 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:11.661 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:11.661 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:11.661 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:11.661 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:11.661 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:11.661 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:11.661 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:11.661 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:11.661 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:11.661 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:11.661 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:11.661 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:11.661 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:11.661 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:11.661 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:11.661 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:11.661 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:11.661 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:11.661 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:11.661 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:11.661 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:11.661 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:11.661 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:11.661 09:47:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3809978 00:15:11.661 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:11.661 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.661 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:11.921 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.921 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3809978 00:15:11.921 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:11.921 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.921 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:12.182 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.182 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3809978 00:15:12.182 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:12.182 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.182 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:12.756 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.756 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3809978 00:15:12.756 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:12.756 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.756 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:13.017 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.017 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3809978 00:15:13.017 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:13.017 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.017 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:13.279 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.279 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3809978 00:15:13.279 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:13.279 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.279 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:13.539 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.539 09:47:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3809978 00:15:13.539 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:13.539 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.539 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:13.800 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.800 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3809978 00:15:13.800 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:13.800 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.800 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:14.370 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.371 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3809978 00:15:14.371 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:14.371 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.371 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:14.632 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.632 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3809978 00:15:14.632 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:14.632 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.632 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:14.893 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.893 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3809978 00:15:14.893 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:14.893 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.893 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:15.155 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.155 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3809978 00:15:15.155 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:15.155 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.155 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:15.416 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.416 09:47:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3809978 00:15:15.416 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:15.416 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.416 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:15.986 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.986 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3809978 00:15:15.986 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:15.986 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.986 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:16.247 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.248 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3809978 00:15:16.248 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:16.248 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.248 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:16.508 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.508 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3809978 00:15:16.508 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:16.508 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.508 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:16.769 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.769 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3809978 00:15:16.769 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:16.769 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.769 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:17.030 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.030 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3809978 00:15:17.030 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:17.030 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.030 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:17.602 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.602 09:47:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3809978 00:15:17.602 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:17.602 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.602 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:17.864 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.864 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3809978 00:15:17.864 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:17.864 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.864 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:18.126 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.126 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3809978 00:15:18.126 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:18.126 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.126 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:18.386 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.386 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3809978 00:15:18.386 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:18.386 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.386 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:18.647 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.647 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3809978 00:15:18.647 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:18.647 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.647 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:19.233 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.233 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3809978 00:15:19.233 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:19.233 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.233 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:19.497 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.497 09:47:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3809978 00:15:19.497 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:19.497 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.497 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:19.757 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.757 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3809978 00:15:19.757 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:19.757 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.757 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:20.018 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.018 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3809978 00:15:20.018 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:20.018 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.018 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:20.278 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.278 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3809978 00:15:20.278 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:20.278 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.278 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:20.850 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.850 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3809978 00:15:20.850 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:20.850 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.850 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:21.111 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.111 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3809978 00:15:21.111 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:21.111 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.111 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:21.372 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.372 09:47:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3809978 00:15:21.372 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:21.372 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.372 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:21.632 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:21.632 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.632 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3809978 00:15:21.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3809978) - No such process 00:15:21.632 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3809978 00:15:21.632 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:21.632 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:21.632 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:21.632 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:21.632 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:15:21.632 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:21.632 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:15:21.632 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:21.632 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:21.632 rmmod nvme_tcp 00:15:21.632 rmmod nvme_fabrics 00:15:21.632 rmmod nvme_keyring 00:15:21.892 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:21.892 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:15:21.892 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:15:21.892 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3809697 ']' 00:15:21.892 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3809697 00:15:21.892 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 3809697 ']' 00:15:21.892 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 3809697 00:15:21.892 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:15:21.892 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:21.892 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3809697 00:15:21.892 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
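The long run of identical kill -0 3809978 checks above is connect_stress.sh's liveness poll (lines 34-35 in the trace): signal 0 delivers nothing and only reports, through the exit status, whether the PID still exists, and each successful check is followed by an rpc_cmd batch so the target keeps servicing RPCs while the initiator hammers it with connects. Once the stress binary exits, kill reports 'No such process' and the script falls through to wait and cleanup. The skeleton of that pattern, with the RPC batch stood in by a sleep:

    # Liveness-poll skeleton; 'stress_tool' is a hypothetical stand-in for the
    # connect_stress binary launched in the background earlier in the trace.
    stress_tool & PERF_PID=$!
    while kill -0 "$PERF_PID" 2>/dev/null; do
        sleep 1        # the real script replays rpc.txt via rpc_cmd here instead
    done
    wait "$PERF_PID"   # reap the child; its exit status becomes $?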
00:15:21.892 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:21.892 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3809697' 00:15:21.892 killing process with pid 3809697 00:15:21.892 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 3809697 00:15:21.892 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 3809697 00:15:21.892 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:21.892 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:21.892 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:21.892 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:15:21.892 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:15:21.892 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:15:21.892 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:21.892 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:21.892 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:21.892 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.892 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:21.892 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:24.436 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:24.436 00:15:24.436 real 0m21.331s 00:15:24.436 user 0m42.045s 00:15:24.436 sys 0m9.486s 00:15:24.436 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:24.436 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:24.436 ************************************ 00:15:24.436 END TEST nvmf_connect_stress 00:15:24.436 ************************************ 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:24.437 ************************************ 00:15:24.437 START TEST nvmf_fused_ordering 00:15:24.437 ************************************ 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:24.437 * Looking for test storage... 
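Worth noting in the teardown just logged: every firewall rule the harness installs is tagged with an SPDK_NVMF comment (visible in the earlier iptables -I ... -m comment invocation), which is what lets the iptr helper clean up with a single atomic dump-filter-reload rather than tracking individual rule handles:

    # Cleanup pattern from nvmf/common.sh's iptr, exactly as logged above:
    # dump all rules, drop the SPDK_NVMF-tagged ones, load the rest back.
    iptables-save | grep -v SPDK_NVMF | iptables-restore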
00:15:24.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:24.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.437 --rc genhtml_branch_coverage=1 00:15:24.437 --rc genhtml_function_coverage=1 00:15:24.437 --rc genhtml_legend=1 00:15:24.437 --rc geninfo_all_blocks=1 00:15:24.437 --rc geninfo_unexecuted_blocks=1 00:15:24.437 00:15:24.437 ' 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:24.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.437 --rc genhtml_branch_coverage=1 00:15:24.437 --rc genhtml_function_coverage=1 00:15:24.437 --rc genhtml_legend=1 00:15:24.437 --rc geninfo_all_blocks=1 00:15:24.437 --rc geninfo_unexecuted_blocks=1 00:15:24.437 00:15:24.437 ' 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:24.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.437 --rc genhtml_branch_coverage=1 00:15:24.437 --rc genhtml_function_coverage=1 00:15:24.437 --rc genhtml_legend=1 00:15:24.437 --rc geninfo_all_blocks=1 00:15:24.437 --rc geninfo_unexecuted_blocks=1 00:15:24.437 00:15:24.437 ' 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:24.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.437 --rc genhtml_branch_coverage=1 00:15:24.437 --rc genhtml_function_coverage=1 00:15:24.437 --rc genhtml_legend=1 00:15:24.437 --rc geninfo_all_blocks=1 00:15:24.437 --rc geninfo_unexecuted_blocks=1 00:15:24.437 00:15:24.437 ' 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:15:24.437 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:24.438 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:24.438 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:24.438 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:24.438 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:24.438 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:24.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:24.438 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:24.438 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:24.438 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:24.438 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:24.438 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:24.438 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:24.438 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:24.438 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:24.438 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:24.438 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.438 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:24.438 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:24.438 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:24.438 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:24.438 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:15:24.438 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:32.581 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:32.581 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:15:32.582 09:47:46 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:32.582 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:32.582 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:32.582 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:32.582 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:32.582 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:32.582 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:32.582 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:32.582 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:32.582 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:32.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:32.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:15:32.582 00:15:32.582 --- 10.0.0.2 ping statistics --- 00:15:32.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.582 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:15:32.582 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:32.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:32.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:15:32.582 00:15:32.582 --- 10.0.0.1 ping statistics --- 00:15:32.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.582 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:15:32.583 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:32.583 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:15:32.583 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:32.583 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:32.583 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:32.583 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:32.583 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:32.583 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:32.583 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:32.583 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:32.583 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:32.583 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:32.583 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:32.583 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3816162 00:15:32.583 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3816162 00:15:32.583 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:32.583 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 3816162 ']' 00:15:32.583 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.583 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:32.583 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:15:32.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.583 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:32.583 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:32.583 [2024-11-27 09:47:47.214302] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:15:32.583 [2024-11-27 09:47:47.214369] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.583 [2024-11-27 09:47:47.314718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.583 [2024-11-27 09:47:47.365474] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:32.583 [2024-11-27 09:47:47.365525] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:32.583 [2024-11-27 09:47:47.365534] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:32.583 [2024-11-27 09:47:47.365541] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:32.583 [2024-11-27 09:47:47.365548] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:32.583 [2024-11-27 09:47:47.366320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.583 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:32.583 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:15:32.583 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:32.583 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:32.583 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:32.843 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:32.843 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:32.843 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.843 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:32.843 [2024-11-27 09:47:48.085267] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:32.843 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.843 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:32.843 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.843 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:32.843 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:32.843 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:32.843 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.843 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:32.843 [2024-11-27 09:47:48.109538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:32.843 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.843 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:32.843 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.843 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:32.843 NULL1 00:15:32.843 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.843 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:32.843 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.843 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:32.843 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.843 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:32.843 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.843 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:32.843 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.843 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:32.843 [2024-11-27 09:47:48.179874] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
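For reference, the target bring-up traced above reduces to the following RPC sequence. This is a rough standalone sketch, not the harness's exact invocation: it assumes an nvmf_tgt already running and listening on the default /var/tmp/spdk.sock (the harness instead wraps the target in the cvl_0_0_ns_spdk namespace), and the subsystem names, addresses, and sizes are taken verbatim from the trace.

# Create the TCP transport (-u sets the in-capsule data size; -o is the
# TCP-specific flag carried in NVMF_TRANSPORT_OPTS in the trace above).
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# Subsystem cnode1: allow any host (-a), serial number (-s), max 10 namespaces (-m).
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
# Listen on the target-side address configured earlier (10.0.0.2, port 4420).
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Null bdev of 1000 MB with 512-byte blocks -- the "size: 1GB" namespace reported below.
scripts/rpc.py bdev_null_create NULL1 1000 512
scripts/rpc.py bdev_wait_for_examine
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1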
00:15:32.843 [2024-11-27 09:47:48.179937] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3816363 ] 00:15:33.416 Attached to nqn.2016-06.io.spdk:cnode1 00:15:33.416 Namespace ID: 1 size: 1GB 00:15:33.416 fused_ordering(0) 00:15:33.416 fused_ordering(1) 00:15:33.416 fused_ordering(2) 00:15:33.416 fused_ordering(3) 00:15:33.416 fused_ordering(4) 00:15:33.416 fused_ordering(5) 00:15:33.416 fused_ordering(6) 00:15:33.416 fused_ordering(7) 00:15:33.416 fused_ordering(8) 00:15:33.416 fused_ordering(9) 00:15:33.416 fused_ordering(10) 00:15:33.416 fused_ordering(11) 00:15:33.416 fused_ordering(12) 00:15:33.416 fused_ordering(13) 00:15:33.416 fused_ordering(14) 00:15:33.416 fused_ordering(15) 00:15:33.416 fused_ordering(16) 00:15:33.416 fused_ordering(17) 00:15:33.416 fused_ordering(18) 00:15:33.416 fused_ordering(19) 00:15:33.416 fused_ordering(20) 00:15:33.416 fused_ordering(21) 00:15:33.416 fused_ordering(22) 00:15:33.416 fused_ordering(23) 00:15:33.416 fused_ordering(24) 00:15:33.416 fused_ordering(25) 00:15:33.416 fused_ordering(26) 00:15:33.416 fused_ordering(27) 00:15:33.416 fused_ordering(28) 00:15:33.416 fused_ordering(29) 00:15:33.416 fused_ordering(30) 00:15:33.416 fused_ordering(31) 00:15:33.416 fused_ordering(32) 00:15:33.416 fused_ordering(33) 00:15:33.416 fused_ordering(34) 00:15:33.416 fused_ordering(35) 00:15:33.416 fused_ordering(36) 00:15:33.416 fused_ordering(37) 00:15:33.416 fused_ordering(38) 00:15:33.416 fused_ordering(39) 00:15:33.416 fused_ordering(40) 00:15:33.416 fused_ordering(41) 00:15:33.416 fused_ordering(42) 00:15:33.416 fused_ordering(43) 00:15:33.416 fused_ordering(44) 00:15:33.416 fused_ordering(45) 00:15:33.416 fused_ordering(46) 00:15:33.416 fused_ordering(47) 00:15:33.416 fused_ordering(48) 00:15:33.416 fused_ordering(49) 00:15:33.416 fused_ordering(50) 00:15:33.416 fused_ordering(51) 00:15:33.416 fused_ordering(52) 00:15:33.416 fused_ordering(53) 00:15:33.416 fused_ordering(54) 00:15:33.416 fused_ordering(55) 00:15:33.416 fused_ordering(56) 00:15:33.416 fused_ordering(57) 00:15:33.416 fused_ordering(58) 00:15:33.416 fused_ordering(59) 00:15:33.416 fused_ordering(60) 00:15:33.416 fused_ordering(61) 00:15:33.416 fused_ordering(62) 00:15:33.416 fused_ordering(63) 00:15:33.416 fused_ordering(64) 00:15:33.416 fused_ordering(65) 00:15:33.416 fused_ordering(66) 00:15:33.416 fused_ordering(67) 00:15:33.416 fused_ordering(68) 00:15:33.416 fused_ordering(69) 00:15:33.416 fused_ordering(70) 00:15:33.416 fused_ordering(71) 00:15:33.416 fused_ordering(72) 00:15:33.416 fused_ordering(73) 00:15:33.416 fused_ordering(74) 00:15:33.416 fused_ordering(75) 00:15:33.416 fused_ordering(76) 00:15:33.416 fused_ordering(77) 00:15:33.416 fused_ordering(78) 00:15:33.416 fused_ordering(79) 00:15:33.416 fused_ordering(80) 00:15:33.416 fused_ordering(81) 00:15:33.416 fused_ordering(82) 00:15:33.416 fused_ordering(83) 00:15:33.416 fused_ordering(84) 00:15:33.416 fused_ordering(85) 00:15:33.416 fused_ordering(86) 00:15:33.416 fused_ordering(87) 00:15:33.416 fused_ordering(88) 00:15:33.416 fused_ordering(89) 00:15:33.416 fused_ordering(90) 00:15:33.416 fused_ordering(91) 00:15:33.416 fused_ordering(92) 00:15:33.416 fused_ordering(93) 00:15:33.416 fused_ordering(94) 00:15:33.416 fused_ordering(95) 00:15:33.416 fused_ordering(96) 00:15:33.416 fused_ordering(97) 00:15:33.416 fused_ordering(98) 
00:15:33.416 fused_ordering(99) 00:15:33.416 fused_ordering(100) 00:15:33.416 fused_ordering(101) 00:15:33.416 fused_ordering(102) 00:15:33.416 fused_ordering(103) 00:15:33.416 fused_ordering(104) 00:15:33.416 fused_ordering(105) 00:15:33.416 fused_ordering(106) 00:15:33.416 fused_ordering(107) 00:15:33.416 fused_ordering(108) 00:15:33.416 fused_ordering(109) 00:15:33.416 fused_ordering(110) 00:15:33.416 fused_ordering(111) 00:15:33.416 fused_ordering(112) 00:15:33.416 fused_ordering(113) 00:15:33.416 fused_ordering(114) 00:15:33.416 fused_ordering(115) 00:15:33.416 fused_ordering(116) 00:15:33.416 fused_ordering(117) 00:15:33.416 fused_ordering(118) 00:15:33.416 fused_ordering(119) 00:15:33.416 fused_ordering(120) 00:15:33.416 fused_ordering(121) 00:15:33.416 fused_ordering(122) 00:15:33.416 fused_ordering(123) 00:15:33.416 fused_ordering(124) 00:15:33.416 fused_ordering(125) 00:15:33.416 fused_ordering(126) 00:15:33.416 fused_ordering(127) 00:15:33.416 fused_ordering(128) 00:15:33.416 fused_ordering(129) 00:15:33.416 fused_ordering(130) 00:15:33.416 fused_ordering(131) 00:15:33.416 fused_ordering(132) 00:15:33.416 fused_ordering(133) 00:15:33.416 fused_ordering(134) 00:15:33.416 fused_ordering(135) 00:15:33.416 fused_ordering(136) 00:15:33.416 fused_ordering(137) 00:15:33.416 fused_ordering(138) 00:15:33.416 fused_ordering(139) 00:15:33.416 fused_ordering(140) 00:15:33.417 fused_ordering(141) 00:15:33.417 fused_ordering(142) 00:15:33.417 fused_ordering(143) 00:15:33.417 fused_ordering(144) 00:15:33.417 fused_ordering(145) 00:15:33.417 fused_ordering(146) 00:15:33.417 fused_ordering(147) 00:15:33.417 fused_ordering(148) 00:15:33.417 fused_ordering(149) 00:15:33.417 fused_ordering(150) 00:15:33.417 fused_ordering(151) 00:15:33.417 fused_ordering(152) 00:15:33.417 fused_ordering(153) 00:15:33.417 fused_ordering(154) 00:15:33.417 fused_ordering(155) 00:15:33.417 fused_ordering(156) 00:15:33.417 fused_ordering(157) 00:15:33.417 fused_ordering(158) 00:15:33.417 fused_ordering(159) 00:15:33.417 fused_ordering(160) 00:15:33.417 fused_ordering(161) 00:15:33.417 fused_ordering(162) 00:15:33.417 fused_ordering(163) 00:15:33.417 fused_ordering(164) 00:15:33.417 fused_ordering(165) 00:15:33.417 fused_ordering(166) 00:15:33.417 fused_ordering(167) 00:15:33.417 fused_ordering(168) 00:15:33.417 fused_ordering(169) 00:15:33.417 fused_ordering(170) 00:15:33.417 fused_ordering(171) 00:15:33.417 fused_ordering(172) 00:15:33.417 fused_ordering(173) 00:15:33.417 fused_ordering(174) 00:15:33.417 fused_ordering(175) 00:15:33.417 fused_ordering(176) 00:15:33.417 fused_ordering(177) 00:15:33.417 fused_ordering(178) 00:15:33.417 fused_ordering(179) 00:15:33.417 fused_ordering(180) 00:15:33.417 fused_ordering(181) 00:15:33.417 fused_ordering(182) 00:15:33.417 fused_ordering(183) 00:15:33.417 fused_ordering(184) 00:15:33.417 fused_ordering(185) 00:15:33.417 fused_ordering(186) 00:15:33.417 fused_ordering(187) 00:15:33.417 fused_ordering(188) 00:15:33.417 fused_ordering(189) 00:15:33.417 fused_ordering(190) 00:15:33.417 fused_ordering(191) 00:15:33.417 fused_ordering(192) 00:15:33.417 fused_ordering(193) 00:15:33.417 fused_ordering(194) 00:15:33.417 fused_ordering(195) 00:15:33.417 fused_ordering(196) 00:15:33.417 fused_ordering(197) 00:15:33.417 fused_ordering(198) 00:15:33.417 fused_ordering(199) 00:15:33.417 fused_ordering(200) 00:15:33.417 fused_ordering(201) 00:15:33.417 fused_ordering(202) 00:15:33.417 fused_ordering(203) 00:15:33.417 fused_ordering(204) 00:15:33.417 fused_ordering(205) 00:15:33.679 
fused_ordering(206) 00:15:33.679 fused_ordering(207) 00:15:33.679 fused_ordering(208) 00:15:33.679 fused_ordering(209) 00:15:33.679 fused_ordering(210) 00:15:33.679 fused_ordering(211) 00:15:33.679 fused_ordering(212) 00:15:33.679 fused_ordering(213) 00:15:33.679 fused_ordering(214) 00:15:33.679 fused_ordering(215) 00:15:33.679 fused_ordering(216) 00:15:33.679 fused_ordering(217) 00:15:33.679 fused_ordering(218) 00:15:33.679 fused_ordering(219) 00:15:33.679 fused_ordering(220) 00:15:33.679 fused_ordering(221) 00:15:33.679 fused_ordering(222) 00:15:33.679 fused_ordering(223) 00:15:33.679 fused_ordering(224) 00:15:33.679 fused_ordering(225) 00:15:33.679 fused_ordering(226) 00:15:33.679 fused_ordering(227) 00:15:33.679 fused_ordering(228) 00:15:33.679 fused_ordering(229) 00:15:33.679 fused_ordering(230) 00:15:33.679 fused_ordering(231) 00:15:33.679 fused_ordering(232) 00:15:33.679 fused_ordering(233) 00:15:33.679 fused_ordering(234) 00:15:33.679 fused_ordering(235) 00:15:33.679 fused_ordering(236) 00:15:33.679 fused_ordering(237) 00:15:33.679 fused_ordering(238) 00:15:33.679 fused_ordering(239) 00:15:33.679 fused_ordering(240) 00:15:33.679 fused_ordering(241) 00:15:33.679 fused_ordering(242) 00:15:33.679 fused_ordering(243) 00:15:33.679 fused_ordering(244) 00:15:33.679 fused_ordering(245) 00:15:33.679 fused_ordering(246) 00:15:33.679 fused_ordering(247) 00:15:33.679 fused_ordering(248) 00:15:33.679 fused_ordering(249) 00:15:33.679 fused_ordering(250) 00:15:33.679 fused_ordering(251) 00:15:33.679 fused_ordering(252) 00:15:33.679 fused_ordering(253) 00:15:33.679 fused_ordering(254) 00:15:33.679 fused_ordering(255) 00:15:33.679 fused_ordering(256) 00:15:33.679 fused_ordering(257) 00:15:33.679 fused_ordering(258) 00:15:33.679 fused_ordering(259) 00:15:33.679 fused_ordering(260) 00:15:33.679 fused_ordering(261) 00:15:33.679 fused_ordering(262) 00:15:33.679 fused_ordering(263) 00:15:33.679 fused_ordering(264) 00:15:33.679 fused_ordering(265) 00:15:33.679 fused_ordering(266) 00:15:33.679 fused_ordering(267) 00:15:33.679 fused_ordering(268) 00:15:33.679 fused_ordering(269) 00:15:33.679 fused_ordering(270) 00:15:33.679 fused_ordering(271) 00:15:33.679 fused_ordering(272) 00:15:33.679 fused_ordering(273) 00:15:33.679 fused_ordering(274) 00:15:33.679 fused_ordering(275) 00:15:33.679 fused_ordering(276) 00:15:33.679 fused_ordering(277) 00:15:33.679 fused_ordering(278) 00:15:33.679 fused_ordering(279) 00:15:33.679 fused_ordering(280) 00:15:33.679 fused_ordering(281) 00:15:33.679 fused_ordering(282) 00:15:33.679 fused_ordering(283) 00:15:33.679 fused_ordering(284) 00:15:33.679 fused_ordering(285) 00:15:33.679 fused_ordering(286) 00:15:33.679 fused_ordering(287) 00:15:33.679 fused_ordering(288) 00:15:33.679 fused_ordering(289) 00:15:33.679 fused_ordering(290) 00:15:33.679 fused_ordering(291) 00:15:33.679 fused_ordering(292) 00:15:33.679 fused_ordering(293) 00:15:33.679 fused_ordering(294) 00:15:33.679 fused_ordering(295) 00:15:33.679 fused_ordering(296) 00:15:33.679 fused_ordering(297) 00:15:33.679 fused_ordering(298) 00:15:33.679 fused_ordering(299) 00:15:33.679 fused_ordering(300) 00:15:33.679 fused_ordering(301) 00:15:33.679 fused_ordering(302) 00:15:33.679 fused_ordering(303) 00:15:33.679 fused_ordering(304) 00:15:33.679 fused_ordering(305) 00:15:33.679 fused_ordering(306) 00:15:33.679 fused_ordering(307) 00:15:33.679 fused_ordering(308) 00:15:33.679 fused_ordering(309) 00:15:33.679 fused_ordering(310) 00:15:33.679 fused_ordering(311) 00:15:33.679 fused_ordering(312) 00:15:33.679 fused_ordering(313) 
00:15:33.679 fused_ordering(314) 00:15:33.679 fused_ordering(315) 00:15:33.679 fused_ordering(316) 00:15:33.679 fused_ordering(317) 00:15:33.679 fused_ordering(318) 00:15:33.679 fused_ordering(319) 00:15:33.679 fused_ordering(320) 00:15:33.679 fused_ordering(321) 00:15:33.679 fused_ordering(322) 00:15:33.679 fused_ordering(323) 00:15:33.679 fused_ordering(324) 00:15:33.679 fused_ordering(325) 00:15:33.679 fused_ordering(326) 00:15:33.679 fused_ordering(327) 00:15:33.679 fused_ordering(328) 00:15:33.679 fused_ordering(329) 00:15:33.679 fused_ordering(330) 00:15:33.679 fused_ordering(331) 00:15:33.679 fused_ordering(332) 00:15:33.679 fused_ordering(333) 00:15:33.679 fused_ordering(334) 00:15:33.679 fused_ordering(335) 00:15:33.679 fused_ordering(336) 00:15:33.679 fused_ordering(337) 00:15:33.679 fused_ordering(338) 00:15:33.679 fused_ordering(339) 00:15:33.679 fused_ordering(340) 00:15:33.679 fused_ordering(341) 00:15:33.679 fused_ordering(342) 00:15:33.679 fused_ordering(343) 00:15:33.679 fused_ordering(344) 00:15:33.679 fused_ordering(345) 00:15:33.679 fused_ordering(346) 00:15:33.679 fused_ordering(347) 00:15:33.679 fused_ordering(348) 00:15:33.679 fused_ordering(349) 00:15:33.679 fused_ordering(350) 00:15:33.679 fused_ordering(351) 00:15:33.679 fused_ordering(352) 00:15:33.679 fused_ordering(353) 00:15:33.679 fused_ordering(354) 00:15:33.679 fused_ordering(355) 00:15:33.679 fused_ordering(356) 00:15:33.679 fused_ordering(357) 00:15:33.679 fused_ordering(358) 00:15:33.679 fused_ordering(359) 00:15:33.679 fused_ordering(360) 00:15:33.679 fused_ordering(361) 00:15:33.679 fused_ordering(362) 00:15:33.679 fused_ordering(363) 00:15:33.679 fused_ordering(364) 00:15:33.679 fused_ordering(365) 00:15:33.679 fused_ordering(366) 00:15:33.679 fused_ordering(367) 00:15:33.679 fused_ordering(368) 00:15:33.679 fused_ordering(369) 00:15:33.679 fused_ordering(370) 00:15:33.679 fused_ordering(371) 00:15:33.679 fused_ordering(372) 00:15:33.679 fused_ordering(373) 00:15:33.679 fused_ordering(374) 00:15:33.679 fused_ordering(375) 00:15:33.679 fused_ordering(376) 00:15:33.679 fused_ordering(377) 00:15:33.679 fused_ordering(378) 00:15:33.679 fused_ordering(379) 00:15:33.679 fused_ordering(380) 00:15:33.679 fused_ordering(381) 00:15:33.679 fused_ordering(382) 00:15:33.679 fused_ordering(383) 00:15:33.679 fused_ordering(384) 00:15:33.679 fused_ordering(385) 00:15:33.679 fused_ordering(386) 00:15:33.679 fused_ordering(387) 00:15:33.679 fused_ordering(388) 00:15:33.679 fused_ordering(389) 00:15:33.679 fused_ordering(390) 00:15:33.679 fused_ordering(391) 00:15:33.679 fused_ordering(392) 00:15:33.679 fused_ordering(393) 00:15:33.679 fused_ordering(394) 00:15:33.679 fused_ordering(395) 00:15:33.679 fused_ordering(396) 00:15:33.679 fused_ordering(397) 00:15:33.679 fused_ordering(398) 00:15:33.679 fused_ordering(399) 00:15:33.679 fused_ordering(400) 00:15:33.679 fused_ordering(401) 00:15:33.679 fused_ordering(402) 00:15:33.679 fused_ordering(403) 00:15:33.679 fused_ordering(404) 00:15:33.679 fused_ordering(405) 00:15:33.679 fused_ordering(406) 00:15:33.679 fused_ordering(407) 00:15:33.679 fused_ordering(408) 00:15:33.679 fused_ordering(409) 00:15:33.679 fused_ordering(410) 00:15:34.250 fused_ordering(411) 00:15:34.250 fused_ordering(412) 00:15:34.250 fused_ordering(413) 00:15:34.250 fused_ordering(414) 00:15:34.250 fused_ordering(415) 00:15:34.250 fused_ordering(416) 00:15:34.250 fused_ordering(417) 00:15:34.250 fused_ordering(418) 00:15:34.250 fused_ordering(419) 00:15:34.250 fused_ordering(420) 00:15:34.250 
fused_ordering(421) 00:15:34.250 fused_ordering(422) 00:15:34.250 fused_ordering(423) 00:15:34.250 fused_ordering(424) 00:15:34.250 fused_ordering(425) 00:15:34.250 fused_ordering(426) 00:15:34.250 fused_ordering(427) 00:15:34.250 fused_ordering(428) 00:15:34.250 fused_ordering(429) 00:15:34.250 fused_ordering(430) 00:15:34.250 fused_ordering(431) 00:15:34.250 fused_ordering(432) 00:15:34.250 fused_ordering(433) 00:15:34.250 fused_ordering(434) 00:15:34.250 fused_ordering(435) 00:15:34.250 fused_ordering(436) 00:15:34.250 fused_ordering(437) 00:15:34.250 fused_ordering(438) 00:15:34.250 fused_ordering(439) 00:15:34.250 fused_ordering(440) 00:15:34.250 fused_ordering(441) 00:15:34.250 fused_ordering(442) 00:15:34.250 fused_ordering(443) 00:15:34.250 fused_ordering(444) 00:15:34.250 fused_ordering(445) 00:15:34.250 fused_ordering(446) 00:15:34.250 fused_ordering(447) 00:15:34.250 fused_ordering(448) 00:15:34.250 fused_ordering(449) 00:15:34.250 fused_ordering(450) 00:15:34.250 fused_ordering(451) 00:15:34.250 fused_ordering(452) 00:15:34.250 fused_ordering(453) 00:15:34.250 fused_ordering(454) 00:15:34.250 fused_ordering(455) 00:15:34.250 fused_ordering(456) 00:15:34.250 fused_ordering(457) 00:15:34.250 fused_ordering(458) 00:15:34.250 fused_ordering(459) 00:15:34.250 fused_ordering(460) 00:15:34.250 fused_ordering(461) 00:15:34.250 fused_ordering(462) 00:15:34.250 fused_ordering(463) 00:15:34.250 fused_ordering(464) 00:15:34.250 fused_ordering(465) 00:15:34.250 fused_ordering(466) 00:15:34.250 fused_ordering(467) 00:15:34.250 fused_ordering(468) 00:15:34.250 fused_ordering(469) 00:15:34.250 fused_ordering(470) 00:15:34.250 fused_ordering(471) 00:15:34.250 fused_ordering(472) 00:15:34.250 fused_ordering(473) 00:15:34.250 fused_ordering(474) 00:15:34.250 fused_ordering(475) 00:15:34.250 fused_ordering(476) 00:15:34.250 fused_ordering(477) 00:15:34.250 fused_ordering(478) 00:15:34.250 fused_ordering(479) 00:15:34.250 fused_ordering(480) 00:15:34.250 fused_ordering(481) 00:15:34.250 fused_ordering(482) 00:15:34.250 fused_ordering(483) 00:15:34.250 fused_ordering(484) 00:15:34.250 fused_ordering(485) 00:15:34.250 fused_ordering(486) 00:15:34.250 fused_ordering(487) 00:15:34.250 fused_ordering(488) 00:15:34.250 fused_ordering(489) 00:15:34.250 fused_ordering(490) 00:15:34.250 fused_ordering(491) 00:15:34.250 fused_ordering(492) 00:15:34.250 fused_ordering(493) 00:15:34.250 fused_ordering(494) 00:15:34.250 fused_ordering(495) 00:15:34.250 fused_ordering(496) 00:15:34.250 fused_ordering(497) 00:15:34.250 fused_ordering(498) 00:15:34.250 fused_ordering(499) 00:15:34.250 fused_ordering(500) 00:15:34.250 fused_ordering(501) 00:15:34.250 fused_ordering(502) 00:15:34.250 fused_ordering(503) 00:15:34.250 fused_ordering(504) 00:15:34.250 fused_ordering(505) 00:15:34.250 fused_ordering(506) 00:15:34.250 fused_ordering(507) 00:15:34.250 fused_ordering(508) 00:15:34.250 fused_ordering(509) 00:15:34.250 fused_ordering(510) 00:15:34.250 fused_ordering(511) 00:15:34.250 fused_ordering(512) 00:15:34.250 fused_ordering(513) 00:15:34.250 fused_ordering(514) 00:15:34.250 fused_ordering(515) 00:15:34.250 fused_ordering(516) 00:15:34.250 fused_ordering(517) 00:15:34.250 fused_ordering(518) 00:15:34.250 fused_ordering(519) 00:15:34.250 fused_ordering(520) 00:15:34.250 fused_ordering(521) 00:15:34.250 fused_ordering(522) 00:15:34.250 fused_ordering(523) 00:15:34.250 fused_ordering(524) 00:15:34.250 fused_ordering(525) 00:15:34.250 fused_ordering(526) 00:15:34.250 fused_ordering(527) 00:15:34.250 fused_ordering(528) 
00:15:34.250 fused_ordering(529) 00:15:34.250 fused_ordering(530) 00:15:34.250 fused_ordering(531) 00:15:34.250 fused_ordering(532) 00:15:34.250 fused_ordering(533) 00:15:34.250 fused_ordering(534) 00:15:34.250 fused_ordering(535) 00:15:34.250 fused_ordering(536) 00:15:34.250 fused_ordering(537) 00:15:34.250 fused_ordering(538) 00:15:34.250 fused_ordering(539) 00:15:34.250 fused_ordering(540) 00:15:34.250 fused_ordering(541) 00:15:34.250 fused_ordering(542) 00:15:34.250 fused_ordering(543) 00:15:34.250 fused_ordering(544) 00:15:34.250 fused_ordering(545) 00:15:34.250 fused_ordering(546) 00:15:34.250 fused_ordering(547) 00:15:34.250 fused_ordering(548) 00:15:34.250 fused_ordering(549) 00:15:34.250 fused_ordering(550) 00:15:34.250 fused_ordering(551) 00:15:34.250 fused_ordering(552) 00:15:34.250 fused_ordering(553) 00:15:34.250 fused_ordering(554) 00:15:34.250 fused_ordering(555) 00:15:34.250 fused_ordering(556) 00:15:34.250 fused_ordering(557) 00:15:34.250 fused_ordering(558) 00:15:34.250 fused_ordering(559) 00:15:34.250 fused_ordering(560) 00:15:34.250 fused_ordering(561) 00:15:34.250 fused_ordering(562) 00:15:34.250 fused_ordering(563) 00:15:34.250 fused_ordering(564) 00:15:34.250 fused_ordering(565) 00:15:34.250 fused_ordering(566) 00:15:34.250 fused_ordering(567) 00:15:34.250 fused_ordering(568) 00:15:34.250 fused_ordering(569) 00:15:34.250 fused_ordering(570) 00:15:34.250 fused_ordering(571) 00:15:34.250 fused_ordering(572) 00:15:34.250 fused_ordering(573) 00:15:34.250 fused_ordering(574) 00:15:34.250 fused_ordering(575) 00:15:34.250 fused_ordering(576) 00:15:34.250 fused_ordering(577) 00:15:34.250 fused_ordering(578) 00:15:34.250 fused_ordering(579) 00:15:34.250 fused_ordering(580) 00:15:34.250 fused_ordering(581) 00:15:34.250 fused_ordering(582) 00:15:34.250 fused_ordering(583) 00:15:34.250 fused_ordering(584) 00:15:34.250 fused_ordering(585) 00:15:34.250 fused_ordering(586) 00:15:34.250 fused_ordering(587) 00:15:34.250 fused_ordering(588) 00:15:34.250 fused_ordering(589) 00:15:34.250 fused_ordering(590) 00:15:34.250 fused_ordering(591) 00:15:34.250 fused_ordering(592) 00:15:34.250 fused_ordering(593) 00:15:34.250 fused_ordering(594) 00:15:34.250 fused_ordering(595) 00:15:34.250 fused_ordering(596) 00:15:34.250 fused_ordering(597) 00:15:34.250 fused_ordering(598) 00:15:34.250 fused_ordering(599) 00:15:34.250 fused_ordering(600) 00:15:34.250 fused_ordering(601) 00:15:34.250 fused_ordering(602) 00:15:34.250 fused_ordering(603) 00:15:34.250 fused_ordering(604) 00:15:34.250 fused_ordering(605) 00:15:34.250 fused_ordering(606) 00:15:34.250 fused_ordering(607) 00:15:34.250 fused_ordering(608) 00:15:34.250 fused_ordering(609) 00:15:34.250 fused_ordering(610) 00:15:34.250 fused_ordering(611) 00:15:34.250 fused_ordering(612) 00:15:34.250 fused_ordering(613) 00:15:34.250 fused_ordering(614) 00:15:34.250 fused_ordering(615) 00:15:34.821 fused_ordering(616) 00:15:34.821 fused_ordering(617) 00:15:34.821 fused_ordering(618) 00:15:34.821 fused_ordering(619) 00:15:34.821 fused_ordering(620) 00:15:34.821 fused_ordering(621) 00:15:34.821 fused_ordering(622) 00:15:34.821 fused_ordering(623) 00:15:34.821 fused_ordering(624) 00:15:34.821 fused_ordering(625) 00:15:34.821 fused_ordering(626) 00:15:34.821 fused_ordering(627) 00:15:34.821 fused_ordering(628) 00:15:34.821 fused_ordering(629) 00:15:34.821 fused_ordering(630) 00:15:34.821 fused_ordering(631) 00:15:34.821 fused_ordering(632) 00:15:34.821 fused_ordering(633) 00:15:34.821 fused_ordering(634) 00:15:34.821 fused_ordering(635) 00:15:34.821 
fused_ordering(636) 00:15:34.821 fused_ordering(637) 00:15:34.821 fused_ordering(638) 00:15:34.821 fused_ordering(639) 00:15:34.821 fused_ordering(640) 00:15:34.821 fused_ordering(641) 00:15:34.821 fused_ordering(642) 00:15:34.821 fused_ordering(643) 00:15:34.821 fused_ordering(644) 00:15:34.821 fused_ordering(645) 00:15:34.821 fused_ordering(646) 00:15:34.821 fused_ordering(647) 00:15:34.821 fused_ordering(648) 00:15:34.821 fused_ordering(649) 00:15:34.821 fused_ordering(650) 00:15:34.821 fused_ordering(651) 00:15:34.821 fused_ordering(652) 00:15:34.821 fused_ordering(653) 00:15:34.821 fused_ordering(654) 00:15:34.821 fused_ordering(655) 00:15:34.821 fused_ordering(656) 00:15:34.821 fused_ordering(657) 00:15:34.821 fused_ordering(658) 00:15:34.821 fused_ordering(659) 00:15:34.821 fused_ordering(660) 00:15:34.821 fused_ordering(661) 00:15:34.821 fused_ordering(662) 00:15:34.821 fused_ordering(663) 00:15:34.821 fused_ordering(664) 00:15:34.821 fused_ordering(665) 00:15:34.821 fused_ordering(666) 00:15:34.821 fused_ordering(667) 00:15:34.821 fused_ordering(668) 00:15:34.821 fused_ordering(669) 00:15:34.821 fused_ordering(670) 00:15:34.821 fused_ordering(671) 00:15:34.821 fused_ordering(672) 00:15:34.821 fused_ordering(673) 00:15:34.821 fused_ordering(674) 00:15:34.821 fused_ordering(675) 00:15:34.821 fused_ordering(676) 00:15:34.821 fused_ordering(677) 00:15:34.821 fused_ordering(678) 00:15:34.821 fused_ordering(679) 00:15:34.821 fused_ordering(680) 00:15:34.821 fused_ordering(681) 00:15:34.821 fused_ordering(682) 00:15:34.821 fused_ordering(683) 00:15:34.821 fused_ordering(684) 00:15:34.821 fused_ordering(685) 00:15:34.821 fused_ordering(686) 00:15:34.821 fused_ordering(687) 00:15:34.821 fused_ordering(688) 00:15:34.821 fused_ordering(689) 00:15:34.821 fused_ordering(690) 00:15:34.821 fused_ordering(691) 00:15:34.821 fused_ordering(692) 00:15:34.821 fused_ordering(693) 00:15:34.821 fused_ordering(694) 00:15:34.821 fused_ordering(695) 00:15:34.821 fused_ordering(696) 00:15:34.821 fused_ordering(697) 00:15:34.821 fused_ordering(698) 00:15:34.821 fused_ordering(699) 00:15:34.821 fused_ordering(700) 00:15:34.821 fused_ordering(701) 00:15:34.821 fused_ordering(702) 00:15:34.821 fused_ordering(703) 00:15:34.821 fused_ordering(704) 00:15:34.821 fused_ordering(705) 00:15:34.821 fused_ordering(706) 00:15:34.821 fused_ordering(707) 00:15:34.821 fused_ordering(708) 00:15:34.821 fused_ordering(709) 00:15:34.821 fused_ordering(710) 00:15:34.821 fused_ordering(711) 00:15:34.821 fused_ordering(712) 00:15:34.821 fused_ordering(713) 00:15:34.821 fused_ordering(714) 00:15:34.821 fused_ordering(715) 00:15:34.821 fused_ordering(716) 00:15:34.821 fused_ordering(717) 00:15:34.821 fused_ordering(718) 00:15:34.821 fused_ordering(719) 00:15:34.821 fused_ordering(720) 00:15:34.821 fused_ordering(721) 00:15:34.821 fused_ordering(722) 00:15:34.821 fused_ordering(723) 00:15:34.821 fused_ordering(724) 00:15:34.821 fused_ordering(725) 00:15:34.821 fused_ordering(726) 00:15:34.821 fused_ordering(727) 00:15:34.821 fused_ordering(728) 00:15:34.821 fused_ordering(729) 00:15:34.821 fused_ordering(730) 00:15:34.821 fused_ordering(731) 00:15:34.821 fused_ordering(732) 00:15:34.821 fused_ordering(733) 00:15:34.821 fused_ordering(734) 00:15:34.821 fused_ordering(735) 00:15:34.821 fused_ordering(736) 00:15:34.821 fused_ordering(737) 00:15:34.821 fused_ordering(738) 00:15:34.821 fused_ordering(739) 00:15:34.821 fused_ordering(740) 00:15:34.821 fused_ordering(741) 00:15:34.821 fused_ordering(742) 00:15:34.821 fused_ordering(743) 
00:15:34.821 fused_ordering(744) 00:15:34.821 fused_ordering(745) 00:15:34.821 fused_ordering(746) 00:15:34.821 fused_ordering(747) 00:15:34.821 fused_ordering(748) 00:15:34.821 fused_ordering(749) 00:15:34.821 fused_ordering(750) 00:15:34.821 fused_ordering(751) 00:15:34.821 fused_ordering(752) 00:15:34.821 fused_ordering(753) 00:15:34.821 fused_ordering(754) 00:15:34.821 fused_ordering(755) 00:15:34.821 fused_ordering(756) 00:15:34.821 fused_ordering(757) 00:15:34.821 fused_ordering(758) 00:15:34.821 fused_ordering(759) 00:15:34.821 fused_ordering(760) 00:15:34.821 fused_ordering(761) 00:15:34.821 fused_ordering(762) 00:15:34.821 fused_ordering(763) 00:15:34.821 fused_ordering(764) 00:15:34.821 fused_ordering(765) 00:15:34.821 fused_ordering(766) 00:15:34.821 fused_ordering(767) 00:15:34.821 fused_ordering(768) 00:15:34.821 fused_ordering(769) 00:15:34.821 fused_ordering(770) 00:15:34.821 fused_ordering(771) 00:15:34.821 fused_ordering(772) 00:15:34.821 fused_ordering(773) 00:15:34.821 fused_ordering(774) 00:15:34.821 fused_ordering(775) 00:15:34.821 fused_ordering(776) 00:15:34.821 fused_ordering(777) 00:15:34.821 fused_ordering(778) 00:15:34.821 fused_ordering(779) 00:15:34.821 fused_ordering(780) 00:15:34.821 fused_ordering(781) 00:15:34.821 fused_ordering(782) 00:15:34.821 fused_ordering(783) 00:15:34.821 fused_ordering(784) 00:15:34.821 fused_ordering(785) 00:15:34.821 fused_ordering(786) 00:15:34.821 fused_ordering(787) 00:15:34.821 fused_ordering(788) 00:15:34.821 fused_ordering(789) 00:15:34.821 fused_ordering(790) 00:15:34.821 fused_ordering(791) 00:15:34.821 fused_ordering(792) 00:15:34.821 fused_ordering(793) 00:15:34.821 fused_ordering(794) 00:15:34.821 fused_ordering(795) 00:15:34.821 fused_ordering(796) 00:15:34.822 fused_ordering(797) 00:15:34.822 fused_ordering(798) 00:15:34.822 fused_ordering(799) 00:15:34.822 fused_ordering(800) 00:15:34.822 fused_ordering(801) 00:15:34.822 fused_ordering(802) 00:15:34.822 fused_ordering(803) 00:15:34.822 fused_ordering(804) 00:15:34.822 fused_ordering(805) 00:15:34.822 fused_ordering(806) 00:15:34.822 fused_ordering(807) 00:15:34.822 fused_ordering(808) 00:15:34.822 fused_ordering(809) 00:15:34.822 fused_ordering(810) 00:15:34.822 fused_ordering(811) 00:15:34.822 fused_ordering(812) 00:15:34.822 fused_ordering(813) 00:15:34.822 fused_ordering(814) 00:15:34.822 fused_ordering(815) 00:15:34.822 fused_ordering(816) 00:15:34.822 fused_ordering(817) 00:15:34.822 fused_ordering(818) 00:15:34.822 fused_ordering(819) 00:15:34.822 fused_ordering(820) 00:15:35.395 fused_ordering(821) 00:15:35.395 fused_ordering(822) 00:15:35.395 fused_ordering(823) 00:15:35.395 fused_ordering(824) 00:15:35.395 fused_ordering(825) 00:15:35.395 fused_ordering(826) 00:15:35.395 fused_ordering(827) 00:15:35.395 fused_ordering(828) 00:15:35.395 fused_ordering(829) 00:15:35.395 fused_ordering(830) 00:15:35.395 fused_ordering(831) 00:15:35.395 fused_ordering(832) 00:15:35.395 fused_ordering(833) 00:15:35.395 fused_ordering(834) 00:15:35.395 fused_ordering(835) 00:15:35.395 fused_ordering(836) 00:15:35.395 fused_ordering(837) 00:15:35.395 fused_ordering(838) 00:15:35.395 fused_ordering(839) 00:15:35.395 fused_ordering(840) 00:15:35.395 fused_ordering(841) 00:15:35.395 fused_ordering(842) 00:15:35.395 fused_ordering(843) 00:15:35.395 fused_ordering(844) 00:15:35.395 fused_ordering(845) 00:15:35.395 fused_ordering(846) 00:15:35.395 fused_ordering(847) 00:15:35.395 fused_ordering(848) 00:15:35.395 fused_ordering(849) 00:15:35.395 fused_ordering(850) 00:15:35.395 
fused_ordering(851) 00:15:35.395 fused_ordering(852) 00:15:35.395 fused_ordering(853) 00:15:35.395 fused_ordering(854) 00:15:35.395 fused_ordering(855) 00:15:35.395 fused_ordering(856) 00:15:35.395 fused_ordering(857) 00:15:35.395 fused_ordering(858) 00:15:35.395 fused_ordering(859) 00:15:35.395 fused_ordering(860) 00:15:35.395 fused_ordering(861) 00:15:35.395 fused_ordering(862) 00:15:35.395 fused_ordering(863) 00:15:35.395 fused_ordering(864) 00:15:35.395 fused_ordering(865) 00:15:35.395 fused_ordering(866) 00:15:35.395 fused_ordering(867) 00:15:35.395 fused_ordering(868) 00:15:35.395 fused_ordering(869) 00:15:35.395 fused_ordering(870) 00:15:35.395 fused_ordering(871) 00:15:35.395 fused_ordering(872) 00:15:35.395 fused_ordering(873) 00:15:35.395 fused_ordering(874) 00:15:35.395 fused_ordering(875) 00:15:35.395 fused_ordering(876) 00:15:35.395 fused_ordering(877) 00:15:35.395 fused_ordering(878) 00:15:35.395 fused_ordering(879) 00:15:35.395 fused_ordering(880) 00:15:35.395 fused_ordering(881) 00:15:35.395 fused_ordering(882) 00:15:35.395 fused_ordering(883) 00:15:35.395 fused_ordering(884) 00:15:35.395 fused_ordering(885) 00:15:35.395 fused_ordering(886) 00:15:35.395 fused_ordering(887) 00:15:35.395 fused_ordering(888) 00:15:35.395 fused_ordering(889) 00:15:35.395 fused_ordering(890) 00:15:35.395 fused_ordering(891) 00:15:35.395 fused_ordering(892) 00:15:35.395 fused_ordering(893) 00:15:35.395 fused_ordering(894) 00:15:35.395 fused_ordering(895) 00:15:35.395 fused_ordering(896) 00:15:35.395 fused_ordering(897) 00:15:35.395 fused_ordering(898) 00:15:35.395 fused_ordering(899) 00:15:35.395 fused_ordering(900) 00:15:35.395 fused_ordering(901) 00:15:35.395 fused_ordering(902) 00:15:35.395 fused_ordering(903) 00:15:35.395 fused_ordering(904) 00:15:35.395 fused_ordering(905) 00:15:35.395 fused_ordering(906) 00:15:35.395 fused_ordering(907) 00:15:35.395 fused_ordering(908) 00:15:35.395 fused_ordering(909) 00:15:35.395 fused_ordering(910) 00:15:35.395 fused_ordering(911) 00:15:35.395 fused_ordering(912) 00:15:35.395 fused_ordering(913) 00:15:35.395 fused_ordering(914) 00:15:35.395 fused_ordering(915) 00:15:35.395 fused_ordering(916) 00:15:35.395 fused_ordering(917) 00:15:35.395 fused_ordering(918) 00:15:35.395 fused_ordering(919) 00:15:35.395 fused_ordering(920) 00:15:35.395 fused_ordering(921) 00:15:35.395 fused_ordering(922) 00:15:35.395 fused_ordering(923) 00:15:35.395 fused_ordering(924) 00:15:35.395 fused_ordering(925) 00:15:35.395 fused_ordering(926) 00:15:35.395 fused_ordering(927) 00:15:35.395 fused_ordering(928) 00:15:35.395 fused_ordering(929) 00:15:35.395 fused_ordering(930) 00:15:35.395 fused_ordering(931) 00:15:35.395 fused_ordering(932) 00:15:35.395 fused_ordering(933) 00:15:35.395 fused_ordering(934) 00:15:35.395 fused_ordering(935) 00:15:35.395 fused_ordering(936) 00:15:35.395 fused_ordering(937) 00:15:35.395 fused_ordering(938) 00:15:35.395 fused_ordering(939) 00:15:35.395 fused_ordering(940) 00:15:35.395 fused_ordering(941) 00:15:35.395 fused_ordering(942) 00:15:35.395 fused_ordering(943) 00:15:35.395 fused_ordering(944) 00:15:35.395 fused_ordering(945) 00:15:35.395 fused_ordering(946) 00:15:35.395 fused_ordering(947) 00:15:35.395 fused_ordering(948) 00:15:35.395 fused_ordering(949) 00:15:35.395 fused_ordering(950) 00:15:35.395 fused_ordering(951) 00:15:35.395 fused_ordering(952) 00:15:35.395 fused_ordering(953) 00:15:35.395 fused_ordering(954) 00:15:35.395 fused_ordering(955) 00:15:35.395 fused_ordering(956) 00:15:35.395 fused_ordering(957) 00:15:35.395 fused_ordering(958) 
00:15:35.395 fused_ordering(959) 00:15:35.395 fused_ordering(960) 00:15:35.395 fused_ordering(961) 00:15:35.395 fused_ordering(962) 00:15:35.395 fused_ordering(963) 00:15:35.395 fused_ordering(964) 00:15:35.396 fused_ordering(965) 00:15:35.396 fused_ordering(966) 00:15:35.396 fused_ordering(967) 00:15:35.396 fused_ordering(968) 00:15:35.396 fused_ordering(969) 00:15:35.396 fused_ordering(970) 00:15:35.396 fused_ordering(971) 00:15:35.396 fused_ordering(972) 00:15:35.396 fused_ordering(973) 00:15:35.396 fused_ordering(974) 00:15:35.396 fused_ordering(975) 00:15:35.396 fused_ordering(976) 00:15:35.396 fused_ordering(977) 00:15:35.396 fused_ordering(978) 00:15:35.396 fused_ordering(979) 00:15:35.396 fused_ordering(980) 00:15:35.396 fused_ordering(981) 00:15:35.396 fused_ordering(982) 00:15:35.396 fused_ordering(983) 00:15:35.396 fused_ordering(984) 00:15:35.396 fused_ordering(985) 00:15:35.396 fused_ordering(986) 00:15:35.396 fused_ordering(987) 00:15:35.396 fused_ordering(988) 00:15:35.396 fused_ordering(989) 00:15:35.396 fused_ordering(990) 00:15:35.396 fused_ordering(991) 00:15:35.396 fused_ordering(992) 00:15:35.396 fused_ordering(993) 00:15:35.396 fused_ordering(994) 00:15:35.396 fused_ordering(995) 00:15:35.396 fused_ordering(996) 00:15:35.396 fused_ordering(997) 00:15:35.396 fused_ordering(998) 00:15:35.396 fused_ordering(999) 00:15:35.396 fused_ordering(1000) 00:15:35.396 fused_ordering(1001) 00:15:35.396 fused_ordering(1002) 00:15:35.396 fused_ordering(1003) 00:15:35.396 fused_ordering(1004) 00:15:35.396 fused_ordering(1005) 00:15:35.396 fused_ordering(1006) 00:15:35.396 fused_ordering(1007) 00:15:35.396 fused_ordering(1008) 00:15:35.396 fused_ordering(1009) 00:15:35.396 fused_ordering(1010) 00:15:35.396 fused_ordering(1011) 00:15:35.396 fused_ordering(1012) 00:15:35.396 fused_ordering(1013) 00:15:35.396 fused_ordering(1014) 00:15:35.396 fused_ordering(1015) 00:15:35.396 fused_ordering(1016) 00:15:35.396 fused_ordering(1017) 00:15:35.396 fused_ordering(1018) 00:15:35.396 fused_ordering(1019) 00:15:35.396 fused_ordering(1020) 00:15:35.396 fused_ordering(1021) 00:15:35.396 fused_ordering(1022) 00:15:35.396 fused_ordering(1023) 00:15:35.396 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:35.396 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:35.396 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:35.396 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:15:35.396 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:35.396 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:15:35.396 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:35.396 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:35.396 rmmod nvme_tcp 00:15:35.396 rmmod nvme_fabrics 00:15:35.396 rmmod nvme_keyring 00:15:35.396 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:35.396 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:15:35.396 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:15:35.396 09:47:50 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3816162 ']' 00:15:35.396 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3816162 00:15:35.396 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 3816162 ']' 00:15:35.396 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 3816162 00:15:35.396 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:15:35.396 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:35.396 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3816162 00:15:35.657 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:35.657 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:35.657 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3816162' 00:15:35.657 killing process with pid 3816162 00:15:35.657 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 3816162 00:15:35.657 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 3816162 00:15:35.657 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:35.657 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:35.657 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:35.657 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:15:35.657 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:15:35.657 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:35.657 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:15:35.657 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:35.657 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:35.657 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.657 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:35.657 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:38.206 00:15:38.206 real 0m13.667s 00:15:38.206 user 0m7.304s 00:15:38.206 sys 0m7.374s 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:38.206 ************************************ 00:15:38.206 END TEST nvmf_fused_ordering 00:15:38.206 
************************************ 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:38.206 ************************************ 00:15:38.206 START TEST nvmf_ns_masking 00:15:38.206 ************************************ 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:38.206 * Looking for test storage... 00:15:38.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:38.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.206 --rc genhtml_branch_coverage=1 00:15:38.206 --rc genhtml_function_coverage=1 00:15:38.206 --rc genhtml_legend=1 00:15:38.206 --rc geninfo_all_blocks=1 00:15:38.206 --rc geninfo_unexecuted_blocks=1 00:15:38.206 00:15:38.206 ' 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:38.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.206 --rc genhtml_branch_coverage=1 00:15:38.206 --rc genhtml_function_coverage=1 00:15:38.206 --rc genhtml_legend=1 00:15:38.206 --rc geninfo_all_blocks=1 00:15:38.206 --rc geninfo_unexecuted_blocks=1 00:15:38.206 00:15:38.206 ' 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:38.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.206 --rc genhtml_branch_coverage=1 00:15:38.206 --rc genhtml_function_coverage=1 00:15:38.206 --rc genhtml_legend=1 00:15:38.206 --rc geninfo_all_blocks=1 00:15:38.206 --rc geninfo_unexecuted_blocks=1 00:15:38.206 00:15:38.206 ' 00:15:38.206 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:38.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.207 --rc genhtml_branch_coverage=1 00:15:38.207 --rc genhtml_function_coverage=1 00:15:38.207 --rc genhtml_legend=1 00:15:38.207 --rc geninfo_all_blocks=1 00:15:38.207 --rc geninfo_unexecuted_blocks=1 00:15:38.207 00:15:38.207 ' 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:38.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
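
The "integer expression expected" complaint above is benign: build_nvmf_app_args reaches a numeric test with an unset variable, so the expansion becomes '[' '' -eq 1 ']' and the test fails noisily but harmlessly before the script moves on. The mechanism is easy to reproduce in isolation (the variable name here is illustrative; the log does not say which one was empty):

    unset MAYBE_UNSET
    [ "$MAYBE_UNSET" -eq 1 ] && echo yes      # -> [: : integer expression expected
    [ "${MAYBE_UNSET:-0}" -eq 1 ] && echo yes # defensive form, no warning
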
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=5ca9c418-3a00-45e9-a6d0-8c3005d67481 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=2a914075-7f5c-4492-ae70-70d18712c62f 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=a3e18068-c0d0-4531-8f4c-23779420b7d4 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:15:38.207 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:46.353 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:46.353 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:15:46.353 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:46.353 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:46.353 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:46.353 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:46.353 09:48:00 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:46.353 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:15:46.353 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:46.353 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:15:46.353 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:15:46.353 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:15:46.353 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:15:46.353 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:15:46.353 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:15:46.353 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:46.353 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:46.353 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:46.353 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:46.353 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:46.353 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:46.353 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:46.353 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:46.353 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:46.353 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:46.353 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:46.353 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:46.353 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:46.354 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:46.354 09:48:00 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:46.354 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:46.354 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
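
gather_supported_nvmf_pci_devs above builds tables of supported Intel (E810 0x1592/0x159b, X722 0x37d2) and Mellanox device IDs and matches them against the PCI bus; here both 0000:4b:00.x functions are E810 ports bound to the ice driver. The /sys walk it performs is roughly the following (a sketch of the idea, not the script itself):

    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
        # Each matching PCI function exposes its kernel netdev(s) under net/.
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "${pci##*/} -> ${net##*/}"
        done
    done
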
00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:46.354 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:46.354 09:48:00 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:46.354 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:46.354 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.539 ms 00:15:46.354 00:15:46.354 --- 10.0.0.2 ping statistics --- 00:15:46.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.354 rtt min/avg/max/mdev = 0.539/0.539/0.539/0.000 ms 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:46.354 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:46.354 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:15:46.354 00:15:46.354 --- 10.0.0.1 ping statistics --- 00:15:46.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.354 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:46.354 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:46.354 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:15:46.354 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:46.354 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:46.354 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:46.354 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3821094 00:15:46.354 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3821094 00:15:46.354 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:46.354 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3821094 ']' 00:15:46.354 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.354 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:46.354 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.354 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:46.354 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:46.354 [2024-11-27 09:48:01.096205] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:15:46.354 [2024-11-27 09:48:01.096275] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.354 [2024-11-27 09:48:01.200214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.354 [2024-11-27 09:48:01.252583] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:46.355 [2024-11-27 09:48:01.252640] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:46.355 [2024-11-27 09:48:01.252649] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:46.355 [2024-11-27 09:48:01.252656] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:46.355 [2024-11-27 09:48:01.252663] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
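
At this point nvmf_tcp_init has built the loopback topology the pings just verified: one E810 port is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), its twin stays in the root namespace as the initiator (10.0.0.1), and the firewall is opened for port 4420. Condensed from the commands traced above (root required):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

nvmf_tgt is then launched inside that namespace (the ip netns exec prefix visible in the nvmfpid line above), so the target listens on 10.0.0.2 while the host-side nvme-cli commands run in the root namespace.
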
00:15:46.355 [2024-11-27 09:48:01.253433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.616 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:46.616 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:46.616 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:46.616 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:46.616 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:46.616 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:46.616 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:46.877 [2024-11-27 09:48:02.136108] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:46.877 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:46.877 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:46.877 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:47.137 Malloc1 00:15:47.137 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:47.137 Malloc2 00:15:47.137 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:47.399 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:47.660 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:47.920 [2024-11-27 09:48:03.149292] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:47.920 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:47.921 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a3e18068-c0d0-4531-8f4c-23779420b7d4 -a 10.0.0.2 -s 4420 -i 4 00:15:48.181 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:48.181 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:48.181 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:48.181 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:48.181 
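
The connect helper above wraps nvme-cli: -q supplies the host NQN the target will use for its masking decisions, -I the host identifier, and -i 4 limits the number of I/O queues. The command as traced, re-wrapped for readability:

    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 \
        -I a3e18068-c0d0-4531-8f4c-23779420b7d4 \
        -i 4

The waitforserial loop that follows then polls lsblk -l -o NAME,SERIAL until a block device with serial SPDKISFASTANDAWESOME shows up.
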
09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:50.093 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:50.093 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:50.093 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:50.093 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:50.093 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:50.093 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:50.093 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:50.093 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:50.093 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:50.093 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:50.093 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:50.093 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:50.093 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:50.093 [ 0]:0x1 00:15:50.093 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:50.093 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:50.353 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=54583001ba4c4260be36a5661b7f9500 00:15:50.353 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 54583001ba4c4260be36a5661b7f9500 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:50.353 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:50.353 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:50.353 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:50.353 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:50.353 [ 0]:0x1 00:15:50.353 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:50.353 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:50.613 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=54583001ba4c4260be36a5661b7f9500 00:15:50.613 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 54583001ba4c4260be36a5661b7f9500 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:50.613 09:48:05 
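
The "[ 0]:0x1" lines come from nvme list-ns, and the nguid checks read nvme id-ns in JSON form; an all-zero NGUID is how an inaccessible namespace identifies itself. A reconstruction of the visibility check as it appears in the trace (ns_masking.sh is the authoritative source):

    ns_is_visible() {
        nvme list-ns /dev/nvme0 | grep "$1"   # informational, prints e.g. "[ 0]:0x1"
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        # A masked namespace reports an all-zero NGUID to this controller.
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

The test asserts visibility with a bare ns_is_visible call and asserts invisibility by wrapping it in the NOT helper, which inverts the exit status.
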
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:50.613 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:50.613 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:50.613 [ 1]:0x2 00:15:50.613 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:50.613 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:50.613 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=de6f23eaa3934ae09301d11a6dd48a4b 00:15:50.613 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ de6f23eaa3934ae09301d11a6dd48a4b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:50.613 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:50.613 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:50.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.613 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:50.873 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:50.873 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:50.873 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a3e18068-c0d0-4531-8f4c-23779420b7d4 -a 10.0.0.2 -s 4420 -i 4 00:15:51.134 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:51.134 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:51.134 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:51.134 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:15:51.134 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:15:51.134 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:53.675 [ 0]:0x2 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=de6f23eaa3934ae09301d11a6dd48a4b 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ de6f23eaa3934ae09301d11a6dd48a4b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:53.675 [ 0]:0x1 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=54583001ba4c4260be36a5661b7f9500 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 54583001ba4c4260be36a5661b7f9500 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:53.675 [ 1]:0x2 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:53.675 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:53.675 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=de6f23eaa3934ae09301d11a6dd48a4b 00:15:53.675 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ de6f23eaa3934ae09301d11a6dd48a4b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:53.675 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:53.936 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:53.936 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:53.936 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:53.936 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:53.936 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:53.936 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:53.936 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:53.936 09:48:09 
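
This sequence is the core of the masking test: a namespace added with --no-auto-visible is hidden from every host until nvmf_ns_add_host grants a specific host NQN access, and nvmf_ns_remove_host hides it again. Condensed from the RPCs traced above (rpc.py stands in for the full scripts/rpc.py path used in the log):

    rpc.py nvmf_subsystem_add_ns  nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host       nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # unmask for host1
    rpc.py nvmf_ns_remove_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # mask again

Each transition is verified from the initiator side with the ns_is_visible checks, without reconnecting: the target signals the change and the host sees namespace 1 appear and disappear.
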
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:53.936 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:53.936 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:53.936 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:53.936 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:53.936 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:53.936 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:53.936 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:53.936 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:53.936 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:53.936 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:53.936 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:53.936 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:53.936 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:53.936 [ 0]:0x2 00:15:53.936 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:53.936 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:53.936 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=de6f23eaa3934ae09301d11a6dd48a4b 00:15:53.936 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ de6f23eaa3934ae09301d11a6dd48a4b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:53.936 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:53.936 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:53.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.936 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:54.196 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:54.196 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a3e18068-c0d0-4531-8f4c-23779420b7d4 -a 10.0.0.2 -s 4420 -i 4 00:15:54.457 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:54.457 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:54.457 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:54.457 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:54.457 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:54.457 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:56.368 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:56.368 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:56.368 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:56.368 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:56.368 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:56.368 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:56.368 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:56.368 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:56.630 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:56.630 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:56.630 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:56.630 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:56.630 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:56.630 [ 0]:0x1 00:15:56.630 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:56.630 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:56.630 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=54583001ba4c4260be36a5661b7f9500 00:15:56.630 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 54583001ba4c4260be36a5661b7f9500 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:56.630 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:56.630 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:56.630 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:56.630 [ 1]:0x2 00:15:56.630 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:56.630 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:56.630 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=de6f23eaa3934ae09301d11a6dd48a4b 00:15:56.630 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ de6f23eaa3934ae09301d11a6dd48a4b != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:56.630 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:56.891 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:56.891 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:56.891 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:56.891 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:56.891 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:56.891 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:56.891 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:56.891 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:56.891 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:56.891 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:56.891 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:56.891 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:56.891 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:56.891 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:56.891 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:56.891 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:56.891 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:56.891 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:56.891 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:15:56.891 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:56.891 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:56.891 [ 0]:0x2 00:15:56.891 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:56.891 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:56.891 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=de6f23eaa3934ae09301d11a6dd48a4b 00:15:56.891 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ de6f23eaa3934ae09301d11a6dd48a4b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:56.891 09:48:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:56.892 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:56.892 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:56.892 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:56.892 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:56.892 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:56.892 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:56.892 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:56.892 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:56.892 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:56.892 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:56.892 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:57.152 [2024-11-27 09:48:12.414407] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:57.152 request: 00:15:57.152 { 00:15:57.152 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:57.152 "nsid": 2, 00:15:57.152 "host": "nqn.2016-06.io.spdk:host1", 00:15:57.152 "method": "nvmf_ns_remove_host", 00:15:57.152 "req_id": 1 00:15:57.152 } 00:15:57.152 Got JSON-RPC error response 00:15:57.152 response: 00:15:57.152 { 00:15:57.152 "code": -32602, 00:15:57.152 "message": "Invalid parameters" 00:15:57.152 } 00:15:57.152 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:57.152 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:57.152 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:57.152 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:57.152 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:57.152 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:57.152 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:57.152 09:48:12 
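
The -32602 "Invalid parameters" response above is the expected negative result: namespace 2 was added without --no-auto-visible, so it apparently carries no per-host visibility list for nvmf_ns_remove_host to edit, and the target rejects the call. The NOT wrapper turns that failure into a pass; a plain-bash equivalent of what the test asserts (rpc.py again shorthand for the full path):

    if ! rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1; then
        echo "rejected as expected: ns 2 is auto-visible"
    fi
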
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:57.152 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:57.152 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:57.152 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:57.152 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:57.152 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:57.152 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:57.152 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:57.152 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:57.152 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:57.152 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:57.152 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:57.152 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:57.152 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:57.152 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:57.152 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:57.152 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:57.152 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:57.152 [ 0]:0x2 00:15:57.152 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:57.152 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:57.152 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=de6f23eaa3934ae09301d11a6dd48a4b 00:15:57.152 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ de6f23eaa3934ae09301d11a6dd48a4b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:57.152 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:57.152 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:57.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.412 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3824103 00:15:57.412 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:57.412 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:57.412 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3824103 /var/tmp/host.sock 00:15:57.412 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3824103 ']' 00:15:57.412 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:57.412 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:57.412 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:57.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:57.412 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:57.412 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:57.412 [2024-11-27 09:48:12.676025] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:15:57.412 [2024-11-27 09:48:12.676078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3824103 ] 00:15:57.412 [2024-11-27 09:48:12.764331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.412 [2024-11-27 09:48:12.799902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.353 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.353 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:58.353 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:58.353 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:58.614 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 5ca9c418-3a00-45e9-a6d0-8c3005d67481 00:15:58.614 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:58.614 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 5CA9C4183A0045E9A6D08C3005D67481 -i 00:15:58.614 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 2a914075-7f5c-4492-ae70-70d18712c62f 00:15:58.614 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:58.614 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 2A9140757F5C4492AE7070D18712C62F -i 00:15:58.873 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:59.132 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:59.133 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:59.133 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:59.391 nvme0n1 00:15:59.651 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:59.651 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:59.913 nvme1n2 00:15:59.913 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:59.913 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:59.913 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:59.913 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:59.913 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:16:00.173 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:16:00.173 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:16:00.173 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:16:00.173 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:16:00.435 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 5ca9c418-3a00-45e9-a6d0-8c3005d67481 == \5\c\a\9\c\4\1\8\-\3\a\0\0\-\4\5\e\9\-\a\6\d\0\-\8\c\3\0\0\5\d\6\7\4\8\1 ]] 00:16:00.435 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:16:00.435 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:16:00.435 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:16:00.435 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
2a914075-7f5c-4492-ae70-70d18712c62f == \2\a\9\1\4\0\7\5\-\7\f\5\c\-\4\4\9\2\-\a\e\7\0\-\7\0\d\1\8\7\1\2\c\6\2\f ]] 00:16:00.435 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:00.695 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:00.954 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 5ca9c418-3a00-45e9-a6d0-8c3005d67481 00:16:00.954 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:16:00.954 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 5CA9C4183A0045E9A6D08C3005D67481 00:16:00.954 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:00.954 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 5CA9C4183A0045E9A6D08C3005D67481 00:16:00.954 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:00.954 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:00.954 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:00.954 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:00.954 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:00.954 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:00.954 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:00.954 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:00.954 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 5CA9C4183A0045E9A6D08C3005D67481 00:16:00.954 [2024-11-27 09:48:16.380957] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:16:00.954 [2024-11-27 09:48:16.380985] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:16:00.954 [2024-11-27 09:48:16.380992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.954 request: 00:16:00.954 { 00:16:00.954 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:00.954 "namespace": { 00:16:00.954 "bdev_name": 
"invalid", 00:16:00.954 "nsid": 1, 00:16:00.954 "nguid": "5CA9C4183A0045E9A6D08C3005D67481", 00:16:00.954 "no_auto_visible": false 00:16:00.954 }, 00:16:00.954 "method": "nvmf_subsystem_add_ns", 00:16:00.954 "req_id": 1 00:16:00.954 } 00:16:00.954 Got JSON-RPC error response 00:16:00.954 response: 00:16:00.954 { 00:16:00.954 "code": -32602, 00:16:00.954 "message": "Invalid parameters" 00:16:00.954 } 00:16:00.954 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:00.954 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:00.954 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:00.954 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:00.954 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 5ca9c418-3a00-45e9-a6d0-8c3005d67481 00:16:00.954 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:16:00.954 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 5CA9C4183A0045E9A6D08C3005D67481 -i 00:16:01.213 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:16:03.371 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:16:03.371 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:16:03.373 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:03.373 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:16:03.373 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3824103 00:16:03.373 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3824103 ']' 00:16:03.373 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3824103 00:16:03.373 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:16:03.373 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:03.373 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3824103 00:16:03.373 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:03.373 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:03.373 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3824103' 00:16:03.373 killing process with pid 3824103 00:16:03.373 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3824103 00:16:03.373 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3824103 00:16:03.633 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:03.898 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:16:03.898 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:16:03.898 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:03.898 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:16:03.898 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:03.898 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:16:03.898 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:03.898 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:03.898 rmmod nvme_tcp 00:16:03.898 rmmod nvme_fabrics 00:16:03.898 rmmod nvme_keyring 00:16:03.898 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:03.898 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:16:03.898 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:16:03.898 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3821094 ']' 00:16:03.898 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3821094 00:16:03.898 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3821094 ']' 00:16:03.898 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3821094 00:16:03.898 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:16:03.898 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:03.898 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3821094 00:16:03.898 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:03.898 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:03.898 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3821094' 00:16:03.898 killing process with pid 3821094 00:16:03.898 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3821094 00:16:03.898 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3821094 00:16:04.159 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:04.159 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:04.159 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:04.159 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:16:04.159 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:16:04.159 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:16:04.159 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:16:04.159 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:04.159 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:04.159 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.159 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:04.159 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.071 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:06.071 00:16:06.071 real 0m28.326s 00:16:06.071 user 0m32.258s 00:16:06.071 sys 0m8.373s 00:16:06.071 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:06.071 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:06.071 ************************************ 00:16:06.071 END TEST nvmf_ns_masking 00:16:06.071 ************************************ 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:06.332 ************************************ 00:16:06.332 START TEST nvmf_nvme_cli 00:16:06.332 ************************************ 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:06.332 * Looking for test storage... 
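The nvmf_ns_masking test that closes above exercises one pattern throughout: toggle per-host visibility with the nvmf_ns_add_host / nvmf_ns_remove_host RPCs, then probe from the initiator whether the namespace still exposes a real NGUID. Calls that are supposed to fail (removing a host from an already-masked namespace, adding a namespace backed by a nonexistent bdev) are wrapped in the harness's NOT helper, which inverts the exit status, so the -32602 "Invalid parameters" JSON-RPC responses above are the expected passes. The visibility probe reduces to roughly this sketch, reconstructed from the xtrace output (the /dev/nvme0 controller name and the all-zero NGUID sentinel are taken from the trace; treat it as an illustration rather than the verbatim helper):

  # A namespace counts as visible when nvme list-ns reports its NSID and
  # id-ns returns a non-zero NGUID; a masked namespace reads back all zeros.
  ns_is_visible() {
      local nsid=$1   # e.g. 0x1 or 0x2
      nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
      local nguid
      nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
      [[ $nguid != "00000000000000000000000000000000" ]]
  }

The NGUIDs handed to nvmf_subsystem_add_ns (-g 5CA9C4183A0045E9A6D08C3005D67481 and friends) are just the namespace UUIDs with the dashes stripped, which is what the uuid2nguid | tr -d - pipeline in the trace produces.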
00:16:06.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:06.332 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:16:06.594 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:06.594 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:06.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.594 --rc genhtml_branch_coverage=1 00:16:06.594 --rc genhtml_function_coverage=1 00:16:06.594 --rc genhtml_legend=1 00:16:06.594 --rc geninfo_all_blocks=1 00:16:06.594 --rc geninfo_unexecuted_blocks=1 00:16:06.594 00:16:06.594 ' 00:16:06.594 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:06.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.594 --rc genhtml_branch_coverage=1 00:16:06.594 --rc genhtml_function_coverage=1 00:16:06.594 --rc genhtml_legend=1 00:16:06.594 --rc geninfo_all_blocks=1 00:16:06.594 --rc geninfo_unexecuted_blocks=1 00:16:06.594 00:16:06.594 ' 00:16:06.594 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:06.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.594 --rc genhtml_branch_coverage=1 00:16:06.594 --rc genhtml_function_coverage=1 00:16:06.594 --rc genhtml_legend=1 00:16:06.594 --rc geninfo_all_blocks=1 00:16:06.594 --rc geninfo_unexecuted_blocks=1 00:16:06.594 00:16:06.594 ' 00:16:06.594 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:06.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.594 --rc genhtml_branch_coverage=1 00:16:06.594 --rc genhtml_function_coverage=1 00:16:06.594 --rc genhtml_legend=1 00:16:06.594 --rc geninfo_all_blocks=1 00:16:06.594 --rc geninfo_unexecuted_blocks=1 00:16:06.594 00:16:06.594 ' 00:16:06.594 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:06.594 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
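The scripts/common.sh activity just traced (lt 1.15 2, IFS=.-:, ver1_l=2 / ver2_l=1) is the harness checking whether the installed lcov predates 2.x so it can pick compatible coverage options. Functionally it is a field-wise numeric version compare, roughly as below (a simplified sketch; the real cmp_versions also handles >, = and the ge/le aliases through the same loop):

  # lt A B: succeed when version A sorts strictly before version B.
  lt() {
      local IFS='.-:' i
      local -a a=($1) b=($2)
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          local x=${a[i]:-0} y=${b[i]:-0}   # missing fields compare as 0
          ((x < y)) && return 0
          ((x > y)) && return 1
      done
      return 1   # equal versions are not "less than"
  }
  lt 1.15 2 && echo old-lcov   # the comparison traced above: true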
00:16:06.594 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:06.594 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:06.594 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:06.594 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:06.594 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:06.594 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:06.594 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:06.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:06.595 09:48:21 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:16:06.595 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:14.738 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:14.738 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:16:14.738 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:14.738 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:14.738 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:14.738 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:14.738 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:14.738 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:16:14.738 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:14.738 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:16:14.738 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:16:14.738 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:16:14.738 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:16:14.738 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:16:14.738 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:16:14.738 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:14.738 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:14.738 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:14.738 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:14.738 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:14.738 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:14.738 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:14.739 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:14.739 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:14.739 
09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:14.739 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:14.739 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:14.739 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:14.739 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:14.739 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:14.739 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:14.739 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:14.739 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:14.739 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:14.739 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:14.739 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:14.739 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:14.739 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:16:14.739 00:16:14.739 --- 10.0.0.2 ping statistics --- 00:16:14.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.739 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:16:14.739 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:14.739 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:14.739 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:16:14.739 00:16:14.739 --- 10.0.0.1 ping statistics --- 00:16:14.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.739 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:16:14.739 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:14.739 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:16:14.739 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:14.739 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:14.739 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:14.739 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:14.739 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:14.739 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:14.739 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:14.739 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:14.739 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:14.739 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:14.739 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:14.739 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3829572 00:16:14.739 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3829572 00:16:14.739 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:14.739 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 3829572 ']' 00:16:14.739 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.739 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:14.739 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.739 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:14.740 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:14.740 [2024-11-27 09:48:29.351566] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
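At this point nvmf_tcp_init has finished wiring the test bed: the two E810 ports found at 0000:4b:00.0/1 come up as cvl_0_0 and cvl_0_1, the target-side port is moved into a private network namespace, and the two pings above prove connectivity in both directions. Condensed from the nvmf/common.sh trace, the topology setup is:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port -> its own ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port, default ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # default ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> default ns

The namespace split is also why the target binary launched next runs under ip netns exec cvl_0_0_ns_spdk: target and initiator traffic must cross the physical NICs instead of loopback.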
00:16:14.740 [2024-11-27 09:48:29.351633] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:14.740 [2024-11-27 09:48:29.450998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:14.740 [2024-11-27 09:48:29.506415] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:14.740 [2024-11-27 09:48:29.506470] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:14.740 [2024-11-27 09:48:29.506479] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:14.740 [2024-11-27 09:48:29.506486] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:14.740 [2024-11-27 09:48:29.506492] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:14.740 [2024-11-27 09:48:29.508906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:14.740 [2024-11-27 09:48:29.509067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:14.740 [2024-11-27 09:48:29.509275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.740 [2024-11-27 09:48:29.509275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:14.740 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:14.740 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:16:14.740 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:14.740 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:14.740 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:15.000 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:15.000 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:15.000 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.000 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:15.000 [2024-11-27 09:48:30.237076] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:15.000 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.000 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:15.000 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.000 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:15.000 Malloc0 00:16:15.000 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.000 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:15.000 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
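Once the four reactors are up, the rpc_cmd calls around here provision the target over its UNIX-socket JSON-RPC (rpc_cmd is a thin retry wrapper around scripts/rpc.py). Written out, the sequence this test issues is:

  rpc.py nvmf_create_transport -t tcp -o -u 8192        # -u 8192: 8 KiB in-capsule data
  rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB ram disks, 512 B blocks
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291   # -a: allow any host
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The serial (-s) and model (-d) strings matter later: waitforserial greps lsblk output for SPDKISFASTANDAWESOME to decide when both namespaces have shown up on the initiator.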
00:16:15.000 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:15.000 Malloc1 00:16:15.000 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.000 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:15.000 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.000 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:15.000 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.000 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:15.000 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.000 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:15.000 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.000 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:15.000 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.000 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:15.000 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.000 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:15.000 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.000 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:15.000 [2024-11-27 09:48:30.350051] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:15.000 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.000 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:15.000 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.000 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:15.000 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.000 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:16:15.261 00:16:15.261 Discovery Log Number of Records 2, Generation counter 2 00:16:15.261 =====Discovery Log Entry 0====== 00:16:15.261 trtype: tcp 00:16:15.261 adrfam: ipv4 00:16:15.261 subtype: current discovery subsystem 00:16:15.261 treq: not required 00:16:15.261 portid: 0 00:16:15.261 trsvcid: 4420 00:16:15.261 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:16:15.261 traddr: 10.0.0.2 00:16:15.261 eflags: explicit discovery connections, duplicate discovery information 00:16:15.261 sectype: none 00:16:15.261 =====Discovery Log Entry 1====== 00:16:15.261 trtype: tcp 00:16:15.261 adrfam: ipv4 00:16:15.261 subtype: nvme subsystem 00:16:15.261 treq: not required 00:16:15.261 portid: 0 00:16:15.261 trsvcid: 4420 00:16:15.261 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:15.261 traddr: 10.0.0.2 00:16:15.261 eflags: none 00:16:15.261 sectype: none 00:16:15.261 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:15.261 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:15.261 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:15.261 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:15.261 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:15.261 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:15.261 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:15.261 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:16:15.261 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:15.261 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:15.261 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:17.172 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:17.172 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:16:17.172 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:17.172 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:16:17.172 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:16:17.172 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:16:19.083 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:19.083 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:19.083 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:19.083 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:16:19.083 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:19.083 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:16:19.083 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:19.083 09:48:34 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:19.083 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:19.083 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:19.083 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:19.083 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:19.083 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:16:19.083 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:19.083 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:19.083 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:16:19.083 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:19.083 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:19.083 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:16:19.083 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:19.083 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:16:19.083 /dev/nvme0n2 ]] 00:16:19.083 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:19.083 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:19.083 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:19.083 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:19.083 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:19.083 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:19.083 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:19.083 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:16:19.083 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:19.083 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:19.083 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:16:19.083 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:19.083 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:19.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.084 09:48:34 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:19.084 rmmod nvme_tcp 00:16:19.084 rmmod nvme_fabrics 00:16:19.084 rmmod nvme_keyring 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3829572 ']' 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3829572 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 3829572 ']' 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 3829572 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
3829572 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3829572' 00:16:19.084 killing process with pid 3829572 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 3829572 00:16:19.084 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 3829572 00:16:19.345 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:19.345 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:19.345 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:19.345 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:16:19.345 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:19.345 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:16:19.345 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:16:19.346 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:19.346 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:19.346 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.346 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:19.346 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.256 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:21.256 00:16:21.256 real 0m15.093s 00:16:21.256 user 0m22.709s 00:16:21.256 sys 0m6.263s 00:16:21.256 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:21.256 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:21.256 ************************************ 00:16:21.256 END TEST nvmf_nvme_cli 00:16:21.256 ************************************ 00:16:21.518 09:48:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:16:21.518 09:48:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:21.518 09:48:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:21.518 09:48:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:21.518 09:48:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:21.518 ************************************ 00:16:21.518 START TEST nvmf_vfio_user 00:16:21.518 ************************************ 00:16:21.518 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:16:21.518 * Looking for test storage... 00:16:21.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:21.518 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:21.518 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:16:21.518 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:21.518 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:21.518 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:21.518 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:21.518 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:21.518 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:16:21.518 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:16:21.518 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:16:21.518 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:16:21.518 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:16:21.518 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:16:21.518 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:16:21.518 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:21.518 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:16:21.518 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:16:21.518 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:21.518 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:21.518 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:16:21.518 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:16:21.518 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:21.518 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:16:21.518 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:16:21.518 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:16:21.518 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:16:21.518 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:21.518 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:16:21.780 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:16:21.780 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:21.780 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:21.780 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:16:21.780 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:21.780 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:21.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.780 --rc genhtml_branch_coverage=1 00:16:21.780 --rc genhtml_function_coverage=1 00:16:21.780 --rc genhtml_legend=1 00:16:21.780 --rc geninfo_all_blocks=1 00:16:21.780 --rc geninfo_unexecuted_blocks=1 00:16:21.780 00:16:21.780 ' 00:16:21.780 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:21.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.780 --rc genhtml_branch_coverage=1 00:16:21.780 --rc genhtml_function_coverage=1 00:16:21.780 --rc genhtml_legend=1 00:16:21.780 --rc geninfo_all_blocks=1 00:16:21.780 --rc geninfo_unexecuted_blocks=1 00:16:21.780 00:16:21.780 ' 00:16:21.780 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:21.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.780 --rc genhtml_branch_coverage=1 00:16:21.780 --rc genhtml_function_coverage=1 00:16:21.780 --rc genhtml_legend=1 00:16:21.780 --rc geninfo_all_blocks=1 00:16:21.780 --rc geninfo_unexecuted_blocks=1 00:16:21.780 00:16:21.780 ' 00:16:21.780 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:21.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.780 --rc genhtml_branch_coverage=1 00:16:21.780 --rc genhtml_function_coverage=1 00:16:21.780 --rc genhtml_legend=1 00:16:21.780 --rc geninfo_all_blocks=1 00:16:21.780 --rc geninfo_unexecuted_blocks=1 00:16:21.780 00:16:21.780 ' 00:16:21.780 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:21.780 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:16:21.780 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:21.780 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:21.780 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:21.780 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:21.780 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:21.780 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:21.780 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:21.780 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:21.780 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:21.780 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:21.780 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:21.780 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:21.780 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:21.780 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:21.780 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:21.780 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:21.780 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:21.780 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:16:21.780 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:21.780 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:21.780 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:21.780 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.780 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.780 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.780 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:16:21.780 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.780 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:16:21.780 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:21.780 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:21.780 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:21.780 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:21.780 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:21.780 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:21.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:21.780 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:21.780 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:21.781 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:21.781 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:21.781 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
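The --hostnqn/--hostid pair used by the earlier nvme discover and connect calls originates in this common.sh bootstrap: nvme gen-hostnqn emits an nqn.2014-08.org.nvmexpress:uuid:... string whose UUID tail doubles as the host ID. A rough sketch of that derivation (the parameter expansion is illustrative, not a quote of common.sh):
NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
NVME_HOSTID=${NVME_HOSTNQN##*:}    # strip through the last ':' to keep the bare UUID
nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 4420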
00:16:21.781 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:21.781 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:21.781 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:21.781 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:21.781 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:21.781 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:21.781 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:21.781 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:16:21.781 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3831313 00:16:21.781 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3831313' 00:16:21.781 Process pid: 3831313 00:16:21.781 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:21.781 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3831313 00:16:21.781 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3831313 ']' 00:16:21.781 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:16:21.781 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.781 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:21.781 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.781 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:21.781 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:21.781 [2024-11-27 09:48:37.093389] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:16:21.781 [2024-11-27 09:48:37.093462] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.781 [2024-11-27 09:48:37.179806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:21.781 [2024-11-27 09:48:37.215779] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:21.781 [2024-11-27 09:48:37.215810] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
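Condensed, the target bring-up logged above amounts to launching nvmf_tgt pinned to cores 0-3 and waiting for its RPC socket before issuing commands. A sketch with the workspace prefix shortened; the polling loop stands in for the harness's waitforlisten helper:
build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &   # instance 0, all tracepoint groups, cores 0-3
nvmfpid=$!
until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do   # ready once /var/tmp/spdk.sock answers
    sleep 0.5
done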
00:16:21.781 [2024-11-27 09:48:37.215817] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:21.781 [2024-11-27 09:48:37.215822] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:21.781 [2024-11-27 09:48:37.215826] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:21.781 [2024-11-27 09:48:37.217204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:21.781 [2024-11-27 09:48:37.217287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:21.781 [2024-11-27 09:48:37.217406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.781 [2024-11-27 09:48:37.217407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:22.754 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:22.754 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:16:22.754 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:23.703 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:23.703 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:23.703 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:23.703 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:23.703 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:23.703 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:23.964 Malloc1 00:16:23.964 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:24.225 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:24.225 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:24.486 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:24.486 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:24.486 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:24.747 Malloc2 00:16:24.747 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
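The per-device setup running here is the same five steps both times (device 1 completed above; device 2 finishes just below). Condensed into the loop the test effectively performs, with paths and names taken from the log:
rpc=scripts/rpc.py
for i in 1 2; do
    dir=/var/run/vfio-user/domain/vfio-user$i/$i
    mkdir -p "$dir"
    $rpc bdev_malloc_create 64 512 -b Malloc$i                  # 64 MiB, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a "$dir" -s 0
done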
00:16:24.747 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:25.008 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:25.270 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:25.270 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:25.270 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:25.270 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:25.270 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:25.270 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:25.270 [2024-11-27 09:48:40.608191] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:16:25.270 [2024-11-27 09:48:40.608237] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3832004 ] 00:16:25.270 [2024-11-27 09:48:40.654955] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:25.270 [2024-11-27 09:48:40.663445] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:25.270 [2024-11-27 09:48:40.663464] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7efc53987000 00:16:25.270 [2024-11-27 09:48:40.664445] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:25.270 [2024-11-27 09:48:40.665437] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:25.270 [2024-11-27 09:48:40.666451] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:25.270 [2024-11-27 09:48:40.667449] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:25.270 [2024-11-27 09:48:40.668450] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:25.270 [2024-11-27 09:48:40.669459] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:25.270 [2024-11-27 09:48:40.670465] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:16:25.270 [2024-11-27 09:48:40.671464] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:25.270 [2024-11-27 09:48:40.672472] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:25.270 [2024-11-27 09:48:40.672481] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7efc5397c000 00:16:25.271 [2024-11-27 09:48:40.673395] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:25.271 [2024-11-27 09:48:40.682845] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:25.271 [2024-11-27 09:48:40.682877] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:16:25.271 [2024-11-27 09:48:40.687560] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:25.271 [2024-11-27 09:48:40.687597] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:25.271 [2024-11-27 09:48:40.687658] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:16:25.271 [2024-11-27 09:48:40.687669] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:16:25.271 [2024-11-27 09:48:40.687673] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:16:25.271 [2024-11-27 09:48:40.688557] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:25.271 [2024-11-27 09:48:40.688564] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:16:25.271 [2024-11-27 09:48:40.688569] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:16:25.271 [2024-11-27 09:48:40.689565] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:25.271 [2024-11-27 09:48:40.689570] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:16:25.271 [2024-11-27 09:48:40.689576] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:25.271 [2024-11-27 09:48:40.690569] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:25.271 [2024-11-27 09:48:40.690575] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:25.271 [2024-11-27 09:48:40.691577] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
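The register traffic above and below is spdk_nvme_identify driving the standard NVMe bring-up over the vfio-user BARs: read CAP (offset 0x0) and VS (0x8), check CC (0x14) and CSTS (0x1c), write CC.EN = 1, then poll CSTS until RDY = 1 before issuing IDENTIFY. The probe can be rerun standalone against either endpoint (invocation as in the log, binary path shortened):
build/bin/spdk_nvme_identify \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
    -g -L nvme -L nvme_vfio -L vfio_pci    # -L enables the DEBUG log flags seen in this output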
00:16:25.271 [2024-11-27 09:48:40.691583] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:16:25.271 [2024-11-27 09:48:40.691587] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:25.271 [2024-11-27 09:48:40.691592] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:25.271 [2024-11-27 09:48:40.691697] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:16:25.271 [2024-11-27 09:48:40.691701] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:25.271 [2024-11-27 09:48:40.691704] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:25.271 [2024-11-27 09:48:40.692581] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:25.271 [2024-11-27 09:48:40.693590] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:25.271 [2024-11-27 09:48:40.694598] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:25.271 [2024-11-27 09:48:40.695596] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:25.271 [2024-11-27 09:48:40.695651] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:25.271 [2024-11-27 09:48:40.696606] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:25.271 [2024-11-27 09:48:40.696612] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:25.271 [2024-11-27 09:48:40.696615] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:25.271 [2024-11-27 09:48:40.696630] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:16:25.271 [2024-11-27 09:48:40.696635] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:25.271 [2024-11-27 09:48:40.696646] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:25.271 [2024-11-27 09:48:40.696649] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:25.271 [2024-11-27 09:48:40.696652] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:25.271 [2024-11-27 09:48:40.696662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:16:25.271 [2024-11-27 09:48:40.696705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:25.271 [2024-11-27 09:48:40.696712] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:16:25.271 [2024-11-27 09:48:40.696716] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:16:25.271 [2024-11-27 09:48:40.696719] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:16:25.271 [2024-11-27 09:48:40.696722] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:25.271 [2024-11-27 09:48:40.696727] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:16:25.271 [2024-11-27 09:48:40.696730] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:16:25.271 [2024-11-27 09:48:40.696734] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:16:25.271 [2024-11-27 09:48:40.696741] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:25.271 [2024-11-27 09:48:40.696748] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:25.271 [2024-11-27 09:48:40.696756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:25.271 [2024-11-27 09:48:40.696764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.271 [2024-11-27 09:48:40.696770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.271 [2024-11-27 09:48:40.696776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.271 [2024-11-27 09:48:40.696782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.271 [2024-11-27 09:48:40.696788] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:25.271 [2024-11-27 09:48:40.696793] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:25.271 [2024-11-27 09:48:40.696799] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:25.271 [2024-11-27 09:48:40.696808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:25.271 [2024-11-27 09:48:40.696814] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:16:25.271 
[2024-11-27 09:48:40.696818] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:25.271 [2024-11-27 09:48:40.696822] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:16:25.271 [2024-11-27 09:48:40.696826] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:16:25.271 [2024-11-27 09:48:40.696833] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:25.271 [2024-11-27 09:48:40.696842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:25.271 [2024-11-27 09:48:40.696886] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:16:25.271 [2024-11-27 09:48:40.696891] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:25.271 [2024-11-27 09:48:40.696896] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:25.271 [2024-11-27 09:48:40.696900] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:25.271 [2024-11-27 09:48:40.696902] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:25.271 [2024-11-27 09:48:40.696906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:25.271 [2024-11-27 09:48:40.696921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:25.271 [2024-11-27 09:48:40.696927] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:16:25.271 [2024-11-27 09:48:40.696936] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:16:25.271 [2024-11-27 09:48:40.696941] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:16:25.271 [2024-11-27 09:48:40.696946] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:25.271 [2024-11-27 09:48:40.696949] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:25.271 [2024-11-27 09:48:40.696952] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:25.271 [2024-11-27 09:48:40.696956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:25.272 [2024-11-27 09:48:40.696973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:25.272 [2024-11-27 09:48:40.696984] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:16:25.272 [2024-11-27 09:48:40.696991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:25.272 [2024-11-27 09:48:40.696996] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:25.272 [2024-11-27 09:48:40.696999] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:25.272 [2024-11-27 09:48:40.697001] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:25.272 [2024-11-27 09:48:40.697006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:25.272 [2024-11-27 09:48:40.697014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:25.272 [2024-11-27 09:48:40.697020] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:25.272 [2024-11-27 09:48:40.697024] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:25.272 [2024-11-27 09:48:40.697030] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:16:25.272 [2024-11-27 09:48:40.697034] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:16:25.272 [2024-11-27 09:48:40.697038] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:25.272 [2024-11-27 09:48:40.697042] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:16:25.272 [2024-11-27 09:48:40.697045] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:25.272 [2024-11-27 09:48:40.697048] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:16:25.272 [2024-11-27 09:48:40.697052] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:16:25.272 [2024-11-27 09:48:40.697065] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:25.272 [2024-11-27 09:48:40.697072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:25.272 [2024-11-27 09:48:40.697080] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:25.272 [2024-11-27 09:48:40.697087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:25.272 [2024-11-27 09:48:40.697095] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:25.272 [2024-11-27 09:48:40.697107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:25.272 [2024-11-27 09:48:40.697115] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:25.272 [2024-11-27 09:48:40.697123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:25.272 [2024-11-27 09:48:40.697134] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:25.272 [2024-11-27 09:48:40.697137] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:25.272 [2024-11-27 09:48:40.697141] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:25.272 [2024-11-27 09:48:40.697143] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:25.272 [2024-11-27 09:48:40.697145] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:25.272 [2024-11-27 09:48:40.697150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:25.272 [2024-11-27 09:48:40.697156] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:25.272 [2024-11-27 09:48:40.697163] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:25.272 [2024-11-27 09:48:40.697166] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:25.272 [2024-11-27 09:48:40.697170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:25.272 [2024-11-27 09:48:40.697175] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:25.272 [2024-11-27 09:48:40.697178] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:25.272 [2024-11-27 09:48:40.697181] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:25.272 [2024-11-27 09:48:40.697185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:25.272 [2024-11-27 09:48:40.697190] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:25.272 [2024-11-27 09:48:40.697193] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:25.272 [2024-11-27 09:48:40.697196] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:25.272 [2024-11-27 09:48:40.697200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:25.272 [2024-11-27 09:48:40.697205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:25.272 [2024-11-27 09:48:40.697213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:16:25.272 [2024-11-27 09:48:40.697221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:25.272 [2024-11-27 09:48:40.697226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:25.272 ===================================================== 00:16:25.272 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:25.272 ===================================================== 00:16:25.272 Controller Capabilities/Features 00:16:25.272 ================================ 00:16:25.272 Vendor ID: 4e58 00:16:25.272 Subsystem Vendor ID: 4e58 00:16:25.272 Serial Number: SPDK1 00:16:25.272 Model Number: SPDK bdev Controller 00:16:25.272 Firmware Version: 25.01 00:16:25.272 Recommended Arb Burst: 6 00:16:25.272 IEEE OUI Identifier: 8d 6b 50 00:16:25.272 Multi-path I/O 00:16:25.272 May have multiple subsystem ports: Yes 00:16:25.272 May have multiple controllers: Yes 00:16:25.272 Associated with SR-IOV VF: No 00:16:25.272 Max Data Transfer Size: 131072 00:16:25.272 Max Number of Namespaces: 32 00:16:25.272 Max Number of I/O Queues: 127 00:16:25.272 NVMe Specification Version (VS): 1.3 00:16:25.272 NVMe Specification Version (Identify): 1.3 00:16:25.272 Maximum Queue Entries: 256 00:16:25.272 Contiguous Queues Required: Yes 00:16:25.272 Arbitration Mechanisms Supported 00:16:25.272 Weighted Round Robin: Not Supported 00:16:25.272 Vendor Specific: Not Supported 00:16:25.272 Reset Timeout: 15000 ms 00:16:25.272 Doorbell Stride: 4 bytes 00:16:25.272 NVM Subsystem Reset: Not Supported 00:16:25.272 Command Sets Supported 00:16:25.272 NVM Command Set: Supported 00:16:25.272 Boot Partition: Not Supported 00:16:25.272 Memory Page Size Minimum: 4096 bytes 00:16:25.272 Memory Page Size Maximum: 4096 bytes 00:16:25.272 Persistent Memory Region: Not Supported 00:16:25.272 Optional Asynchronous Events Supported 00:16:25.272 Namespace Attribute Notices: Supported 00:16:25.272 Firmware Activation Notices: Not Supported 00:16:25.272 ANA Change Notices: Not Supported 00:16:25.272 PLE Aggregate Log Change Notices: Not Supported 00:16:25.272 LBA Status Info Alert Notices: Not Supported 00:16:25.272 EGE Aggregate Log Change Notices: Not Supported 00:16:25.272 Normal NVM Subsystem Shutdown event: Not Supported 00:16:25.272 Zone Descriptor Change Notices: Not Supported 00:16:25.272 Discovery Log Change Notices: Not Supported 00:16:25.272 Controller Attributes 00:16:25.272 128-bit Host Identifier: Supported 00:16:25.272 Non-Operational Permissive Mode: Not Supported 00:16:25.272 NVM Sets: Not Supported 00:16:25.272 Read Recovery Levels: Not Supported 00:16:25.272 Endurance Groups: Not Supported 00:16:25.272 Predictable Latency Mode: Not Supported 00:16:25.272 Traffic Based Keep ALive: Not Supported 00:16:25.272 Namespace Granularity: Not Supported 00:16:25.272 SQ Associations: Not Supported 00:16:25.272 UUID List: Not Supported 00:16:25.272 Multi-Domain Subsystem: Not Supported 00:16:25.272 Fixed Capacity Management: Not Supported 00:16:25.272 Variable Capacity Management: Not Supported 00:16:25.272 Delete Endurance Group: Not Supported 00:16:25.272 Delete NVM Set: Not Supported 00:16:25.272 Extended LBA Formats Supported: Not Supported 00:16:25.272 Flexible Data Placement Supported: Not Supported 00:16:25.272 00:16:25.272 Controller Memory Buffer Support 00:16:25.272 ================================ 00:16:25.272 
Supported: No 00:16:25.272 00:16:25.272 Persistent Memory Region Support 00:16:25.272 ================================ 00:16:25.272 Supported: No 00:16:25.272 00:16:25.272 Admin Command Set Attributes 00:16:25.272 ============================ 00:16:25.272 Security Send/Receive: Not Supported 00:16:25.272 Format NVM: Not Supported 00:16:25.273 Firmware Activate/Download: Not Supported 00:16:25.273 Namespace Management: Not Supported 00:16:25.273 Device Self-Test: Not Supported 00:16:25.273 Directives: Not Supported 00:16:25.273 NVMe-MI: Not Supported 00:16:25.273 Virtualization Management: Not Supported 00:16:25.273 Doorbell Buffer Config: Not Supported 00:16:25.273 Get LBA Status Capability: Not Supported 00:16:25.273 Command & Feature Lockdown Capability: Not Supported 00:16:25.273 Abort Command Limit: 4 00:16:25.273 Async Event Request Limit: 4 00:16:25.273 Number of Firmware Slots: N/A 00:16:25.273 Firmware Slot 1 Read-Only: N/A 00:16:25.273 Firmware Activation Without Reset: N/A 00:16:25.273 Multiple Update Detection Support: N/A 00:16:25.273 Firmware Update Granularity: No Information Provided 00:16:25.273 Per-Namespace SMART Log: No 00:16:25.273 Asymmetric Namespace Access Log Page: Not Supported 00:16:25.273 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:25.273 Command Effects Log Page: Supported 00:16:25.273 Get Log Page Extended Data: Supported 00:16:25.273 Telemetry Log Pages: Not Supported 00:16:25.273 Persistent Event Log Pages: Not Supported 00:16:25.273 Supported Log Pages Log Page: May Support 00:16:25.273 Commands Supported & Effects Log Page: Not Supported 00:16:25.273 Feature Identifiers & Effects Log Page:May Support 00:16:25.273 NVMe-MI Commands & Effects Log Page: May Support 00:16:25.273 Data Area 4 for Telemetry Log: Not Supported 00:16:25.273 Error Log Page Entries Supported: 128 00:16:25.273 Keep Alive: Supported 00:16:25.273 Keep Alive Granularity: 10000 ms 00:16:25.273 00:16:25.273 NVM Command Set Attributes 00:16:25.273 ========================== 00:16:25.273 Submission Queue Entry Size 00:16:25.273 Max: 64 00:16:25.273 Min: 64 00:16:25.273 Completion Queue Entry Size 00:16:25.273 Max: 16 00:16:25.273 Min: 16 00:16:25.273 Number of Namespaces: 32 00:16:25.273 Compare Command: Supported 00:16:25.273 Write Uncorrectable Command: Not Supported 00:16:25.273 Dataset Management Command: Supported 00:16:25.273 Write Zeroes Command: Supported 00:16:25.273 Set Features Save Field: Not Supported 00:16:25.273 Reservations: Not Supported 00:16:25.273 Timestamp: Not Supported 00:16:25.273 Copy: Supported 00:16:25.273 Volatile Write Cache: Present 00:16:25.273 Atomic Write Unit (Normal): 1 00:16:25.273 Atomic Write Unit (PFail): 1 00:16:25.273 Atomic Compare & Write Unit: 1 00:16:25.273 Fused Compare & Write: Supported 00:16:25.273 Scatter-Gather List 00:16:25.273 SGL Command Set: Supported (Dword aligned) 00:16:25.273 SGL Keyed: Not Supported 00:16:25.273 SGL Bit Bucket Descriptor: Not Supported 00:16:25.273 SGL Metadata Pointer: Not Supported 00:16:25.273 Oversized SGL: Not Supported 00:16:25.273 SGL Metadata Address: Not Supported 00:16:25.273 SGL Offset: Not Supported 00:16:25.273 Transport SGL Data Block: Not Supported 00:16:25.273 Replay Protected Memory Block: Not Supported 00:16:25.273 00:16:25.273 Firmware Slot Information 00:16:25.273 ========================= 00:16:25.273 Active slot: 1 00:16:25.273 Slot 1 Firmware Revision: 25.01 00:16:25.273 00:16:25.273 00:16:25.273 Commands Supported and Effects 00:16:25.273 ============================== 00:16:25.273 Admin 
Commands 00:16:25.273 -------------- 00:16:25.273 Get Log Page (02h): Supported 00:16:25.273 Identify (06h): Supported 00:16:25.273 Abort (08h): Supported 00:16:25.273 Set Features (09h): Supported 00:16:25.273 Get Features (0Ah): Supported 00:16:25.273 Asynchronous Event Request (0Ch): Supported 00:16:25.273 Keep Alive (18h): Supported 00:16:25.273 I/O Commands 00:16:25.273 ------------ 00:16:25.273 Flush (00h): Supported LBA-Change 00:16:25.273 Write (01h): Supported LBA-Change 00:16:25.273 Read (02h): Supported 00:16:25.273 Compare (05h): Supported 00:16:25.273 Write Zeroes (08h): Supported LBA-Change 00:16:25.273 Dataset Management (09h): Supported LBA-Change 00:16:25.273 Copy (19h): Supported LBA-Change 00:16:25.273 00:16:25.273 Error Log 00:16:25.273 ========= 00:16:25.273 00:16:25.273 Arbitration 00:16:25.273 =========== 00:16:25.273 Arbitration Burst: 1 00:16:25.273 00:16:25.273 Power Management 00:16:25.273 ================ 00:16:25.273 Number of Power States: 1 00:16:25.273 Current Power State: Power State #0 00:16:25.273 Power State #0: 00:16:25.273 Max Power: 0.00 W 00:16:25.273 Non-Operational State: Operational 00:16:25.273 Entry Latency: Not Reported 00:16:25.273 Exit Latency: Not Reported 00:16:25.273 Relative Read Throughput: 0 00:16:25.273 Relative Read Latency: 0 00:16:25.273 Relative Write Throughput: 0 00:16:25.273 Relative Write Latency: 0 00:16:25.273 Idle Power: Not Reported 00:16:25.273 Active Power: Not Reported 00:16:25.273 Non-Operational Permissive Mode: Not Supported 00:16:25.273 00:16:25.273 Health Information 00:16:25.273 ================== 00:16:25.273 Critical Warnings: 00:16:25.273 Available Spare Space: OK 00:16:25.273 Temperature: OK 00:16:25.273 Device Reliability: OK 00:16:25.273 Read Only: No 00:16:25.273 Volatile Memory Backup: OK 00:16:25.273 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:25.273 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:25.273 Available Spare: 0% 00:16:25.273 Available Spare Threshold: 0% 00:16:25.273 Life Percentage Used: 0% 00:16:25.273 Data Units Read: 0 00:16:25.273 Data Units Written: 0 00:16:25.273 Host Read Commands: 0 00:16:25.273 Host Write Commands: 0 00:16:25.273 Controller Busy Time: 0 minutes 00:16:25.273 Power Cycles: 0 00:16:25.273 Power On Hours: 0 hours 00:16:25.273 Unsafe Shutdowns: 0 00:16:25.273 Unrecoverable Media Errors: 0 00:16:25.273 Lifetime Error Log Entries: 0 00:16:25.273 Warning Temperature Time: 0 minutes 00:16:25.273 Critical Temperature Time: 0 minutes 00:16:25.273 00:16:25.273 Number of Queues 00:16:25.273 ================ 00:16:25.273 Number of I/O Submission Queues: 127 00:16:25.273 Number of I/O Completion Queues: 127 00:16:25.273 00:16:25.273 Active Namespaces 00:16:25.273 ================= 00:16:25.273 Namespace ID:1 00:16:25.273 Error Recovery Timeout: Unlimited 00:16:25.273 Command Set Identifier: NVM (00h) 00:16:25.273 Deallocate: Supported 00:16:25.273 Deallocated/Unwritten Error: Not Supported 00:16:25.273 Deallocated Read Value: Unknown 00:16:25.273 Deallocate in Write Zeroes: Not Supported 00:16:25.273 Deallocated Guard Field: 0xFFFF 00:16:25.273 Flush: Supported 00:16:25.273 Reservation: Supported 00:16:25.273 Namespace Sharing Capabilities: Multiple Controllers 00:16:25.273 Size (in LBAs): 131072 (0GiB) 00:16:25.273 Capacity (in LBAs): 131072 (0GiB) 00:16:25.273 Utilization (in LBAs): 131072 (0GiB) 00:16:25.273 NGUID: 3AEFDA9974BE4EEBA4A94869A0BA62A9 00:16:25.274 UUID: 3aefda99-74be-4eeb-a4a9-4869a0ba62a9 00:16:25.274 Thin Provisioning: Not Supported 00:16:25.274 Per-NS Atomic Units: Yes 00:16:25.274 Atomic Boundary Size (Normal): 0 00:16:25.274 Atomic Boundary Size (PFail): 0 00:16:25.274 Atomic Boundary Offset: 0 00:16:25.274 Maximum Single Source Range Length: 65535 00:16:25.274 Maximum Copy Length: 65535 00:16:25.274 Maximum Source Range Count: 1 00:16:25.274 NGUID/EUI64 Never Reused: No 00:16:25.274 Namespace Write Protected: No 00:16:25.274 Number of LBA Formats: 1 00:16:25.274 Current LBA Format: LBA Format #00 00:16:25.274 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:25.274 
[2024-11-27 09:48:40.697299] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:25.273 [2024-11-27 09:48:40.697310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:25.273 [2024-11-27 09:48:40.697329] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:16:25.273 [2024-11-27 09:48:40.697336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.273 [2024-11-27 09:48:40.697340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.273 [2024-11-27 09:48:40.697345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.273 [2024-11-27 09:48:40.697349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.273 [2024-11-27 09:48:40.697611] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:25.273 [2024-11-27 09:48:40.697620] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:25.273 [2024-11-27 09:48:40.698617] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:25.273 [2024-11-27 09:48:40.698657] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:16:25.273 [2024-11-27 09:48:40.698662] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:16:25.273 [2024-11-27 09:48:40.699629] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:25.273 [2024-11-27 09:48:40.699637] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:16:25.273 [2024-11-27 09:48:40.699688] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:25.273 [2024-11-27 09:48:40.702167] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:25.273 
00:16:25.534 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
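For anyone replaying the @84 read pass (or the @85 write pass, which changes only -w) outside the CI pool, a minimal sketch of the same invocation follows; the flag glosses in the comments are assumptions drawn from spdk_nvme_perf usage text rather than anything this log states, and the -g gloss is inferred from the --single-file-segments EAL argument the same flag produces for the identify run later in this log.

# Sketch of the @84 perf run above (glosses assumed, paths as in this run):
#   -r  transport ID of the target (trtype/traddr/subnqn)
#   -s  DPDK hugepage memory to reserve, in MB
#   -g  single-file DPDK memory segments (assumed gloss)
#   -q  queue depth    -o  I/O size in bytes    -w  workload
#   -t  run time in seconds    -c  core mask (0x2 = one worker on core 1)
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
"$PERF" \
  -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
  -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2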
00:16:25.534 [2024-11-27 09:48:40.891860] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:30.820 Initializing NVMe Controllers 00:16:30.820 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:30.820 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:30.820 Initialization complete. Launching workers. 00:16:30.820 ======================================================== 00:16:30.820 Latency(us) 00:16:30.820 Device Information : IOPS MiB/s Average min max 00:16:30.820 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39969.85 156.13 3202.28 847.51 7780.68 00:16:30.820 ======================================================== 00:16:30.820 Total : 39969.85 156.13 3202.28 847.51 7780.68 00:16:30.820 00:16:30.820 [2024-11-27 09:48:45.913844] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:30.820 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:30.820 [2024-11-27 09:48:46.105702] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:36.108 Initializing NVMe Controllers 00:16:36.108 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:36.108 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:36.108 Initialization complete. Launching workers. 
00:16:36.108 ======================================================== 00:16:36.109 Latency(us) 00:16:36.109 Device Information : IOPS MiB/s Average min max 00:16:36.109 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16053.16 62.71 7979.05 4987.14 10976.26 00:16:36.109 ======================================================== 00:16:36.109 Total : 16053.16 62.71 7979.05 4987.14 10976.26 00:16:36.109 00:16:36.109 [2024-11-27 09:48:51.146585] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:36.109 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:36.109 [2024-11-27 09:48:51.346450] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:41.393 [2024-11-27 09:48:56.417418] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:41.393 Initializing NVMe Controllers 00:16:41.393 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:41.393 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:41.393 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:41.393 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:41.393 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:41.393 Initialization complete. Launching workers. 00:16:41.393 Starting thread on core 2 00:16:41.393 Starting thread on core 3 00:16:41.393 Starting thread on core 1 00:16:41.393 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:41.393 [2024-11-27 09:48:56.666439] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:44.692 [2024-11-27 09:48:59.728379] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:44.692 Initializing NVMe Controllers 00:16:44.692 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:44.692 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:44.692 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:44.692 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:44.692 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:44.692 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:44.692 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:44.692 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:44.692 Initialization complete. Launching workers. 
00:16:44.692 Starting thread on core 1 with urgent priority queue 00:16:44.692 Starting thread on core 2 with urgent priority queue 00:16:44.692 Starting thread on core 3 with urgent priority queue 00:16:44.692 Starting thread on core 0 with urgent priority queue 00:16:44.692 SPDK bdev Controller (SPDK1 ) core 0: 10023.33 IO/s 9.98 secs/100000 ios 00:16:44.692 SPDK bdev Controller (SPDK1 ) core 1: 13177.00 IO/s 7.59 secs/100000 ios 00:16:44.692 SPDK bdev Controller (SPDK1 ) core 2: 9165.00 IO/s 10.91 secs/100000 ios 00:16:44.692 SPDK bdev Controller (SPDK1 ) core 3: 11342.00 IO/s 8.82 secs/100000 ios 00:16:44.692 ======================================================== 00:16:44.692 00:16:44.692 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:44.692 [2024-11-27 09:48:59.964578] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:44.692 Initializing NVMe Controllers 00:16:44.692 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:44.692 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:44.692 Namespace ID: 1 size: 0GB 00:16:44.692 Initialization complete. 00:16:44.692 INFO: using host memory buffer for IO 00:16:44.692 Hello world! 00:16:44.692 [2024-11-27 09:48:59.998800] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:44.692 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:44.952 [2024-11-27 09:49:00.240860] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:45.894 Initializing NVMe Controllers 00:16:45.894 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:45.894 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:45.894 Initialization complete. Launching workers. 
00:16:45.894 submit (in ns) avg, min, max = 6699.3, 2823.3, 4005604.2 00:16:45.894 complete (in ns) avg, min, max = 15065.7, 1633.3, 4004623.3 00:16:45.894 00:16:45.894 Submit histogram 00:16:45.894 ================ 00:16:45.894 Range in us Cumulative Count 00:16:45.894 2.813 - 2.827: 0.0835% ( 17) 00:16:45.894 2.827 - 2.840: 1.2574% ( 239) 00:16:45.894 2.840 - 2.853: 3.6100% ( 479) 00:16:45.894 2.853 - 2.867: 7.9568% ( 885) 00:16:45.894 2.867 - 2.880: 12.8880% ( 1004) 00:16:45.894 2.880 - 2.893: 19.0570% ( 1256) 00:16:45.894 2.893 - 2.907: 25.3045% ( 1272) 00:16:45.894 2.907 - 2.920: 31.1984% ( 1200) 00:16:45.894 2.920 - 2.933: 36.2574% ( 1030) 00:16:45.894 2.933 - 2.947: 40.6778% ( 900) 00:16:45.894 2.947 - 2.960: 44.7839% ( 836) 00:16:45.894 2.960 - 2.973: 51.1935% ( 1305) 00:16:45.894 2.973 - 2.987: 60.6680% ( 1929) 00:16:45.894 2.987 - 3.000: 70.0737% ( 1915) 00:16:45.894 3.000 - 3.013: 78.7230% ( 1761) 00:16:45.894 3.013 - 3.027: 85.0589% ( 1290) 00:16:45.894 3.027 - 3.040: 90.7417% ( 1157) 00:16:45.894 3.040 - 3.053: 94.6022% ( 786) 00:16:45.894 3.053 - 3.067: 97.1660% ( 522) 00:16:45.894 3.067 - 3.080: 98.2466% ( 220) 00:16:45.894 3.080 - 3.093: 98.8851% ( 130) 00:16:45.894 3.093 - 3.107: 99.1994% ( 64) 00:16:45.894 3.107 - 3.120: 99.2780% ( 16) 00:16:45.894 3.120 - 3.133: 99.3271% ( 10) 00:16:45.894 3.133 - 3.147: 99.3664% ( 8) 00:16:45.894 3.147 - 3.160: 99.3861% ( 4) 00:16:45.894 3.187 - 3.200: 99.3910% ( 1) 00:16:45.894 3.200 - 3.213: 99.3959% ( 1) 00:16:45.894 3.213 - 3.227: 99.4008% ( 1) 00:16:45.895 3.253 - 3.267: 99.4057% ( 1) 00:16:45.895 3.267 - 3.280: 99.4106% ( 1) 00:16:45.895 3.280 - 3.293: 99.4155% ( 1) 00:16:45.895 3.333 - 3.347: 99.4204% ( 1) 00:16:45.895 3.347 - 3.360: 99.4253% ( 1) 00:16:45.895 3.387 - 3.400: 99.4352% ( 2) 00:16:45.895 3.413 - 3.440: 99.4499% ( 3) 00:16:45.895 3.440 - 3.467: 99.4548% ( 1) 00:16:45.895 3.467 - 3.493: 99.4646% ( 2) 00:16:45.895 3.520 - 3.547: 99.4794% ( 3) 00:16:45.895 3.547 - 3.573: 99.4843% ( 1) 00:16:45.895 3.573 - 3.600: 99.4990% ( 3) 00:16:45.895 3.600 - 3.627: 99.5088% ( 2) 00:16:45.895 3.627 - 3.653: 99.5334% ( 5) 00:16:45.895 3.653 - 3.680: 99.5432% ( 2) 00:16:45.895 3.707 - 3.733: 99.5481% ( 1) 00:16:45.895 3.760 - 3.787: 99.5678% ( 4) 00:16:45.895 3.787 - 3.813: 99.5727% ( 1) 00:16:45.895 3.813 - 3.840: 99.5776% ( 1) 00:16:45.895 3.840 - 3.867: 99.5923% ( 3) 00:16:45.895 3.867 - 3.893: 99.6022% ( 2) 00:16:45.895 3.973 - 4.000: 99.6071% ( 1) 00:16:45.895 4.027 - 4.053: 99.6120% ( 1) 00:16:45.895 4.053 - 4.080: 99.6169% ( 1) 00:16:45.895 4.080 - 4.107: 99.6218% ( 1) 00:16:45.895 4.133 - 4.160: 99.6316% ( 2) 00:16:45.895 4.240 - 4.267: 99.6365% ( 1) 00:16:45.895 4.347 - 4.373: 99.6415% ( 1) 00:16:45.895 4.587 - 4.613: 99.6464% ( 1) 00:16:45.895 4.720 - 4.747: 99.6513% ( 1) 00:16:45.895 4.827 - 4.853: 99.6562% ( 1) 00:16:45.895 4.880 - 4.907: 99.6611% ( 1) 00:16:45.895 4.907 - 4.933: 99.6660% ( 1) 00:16:45.895 5.013 - 5.040: 99.6709% ( 1) 00:16:45.895 5.040 - 5.067: 99.6857% ( 3) 00:16:45.895 5.120 - 5.147: 99.6955% ( 2) 00:16:45.895 5.547 - 5.573: 99.7004% ( 1) 00:16:45.895 5.733 - 5.760: 99.7053% ( 1) 00:16:45.895 5.840 - 5.867: 99.7151% ( 2) 00:16:45.895 5.867 - 5.893: 99.7200% ( 1) 00:16:45.895 5.893 - 5.920: 99.7250% ( 1) 00:16:45.895 5.920 - 5.947: 99.7299% ( 1) 00:16:45.895 6.000 - 6.027: 99.7348% ( 1) 00:16:45.895 6.027 - 6.053: 99.7397% ( 1) 00:16:45.895 6.053 - 6.080: 99.7446% ( 1) 00:16:45.895 6.133 - 6.160: 99.7593% ( 3) 00:16:45.895 6.160 - 6.187: 99.7642% ( 1) 00:16:45.895 6.213 - 6.240: 99.7692% ( 1) 
00:16:45.895 6.240 - 6.267: 99.7790% ( 2) 00:16:45.895 6.320 - 6.347: 99.7839% ( 1) 00:16:45.895 6.347 - 6.373: 99.7888% ( 1) 00:16:45.895 6.373 - 6.400: 99.7937% ( 1) 00:16:45.895 6.427 - 6.453: 99.7986% ( 1) 00:16:45.895 6.533 - 6.560: 99.8035% ( 1) 00:16:45.895 6.693 - 6.720: 99.8084% ( 1) 00:16:45.895 6.747 - 6.773: 99.8183% ( 2) 00:16:45.895 6.773 - 6.800: 99.8281% ( 2) 00:16:45.895 6.827 - 6.880: 99.8379% ( 2) 00:16:45.895 6.880 - 6.933: 99.8428% ( 1) 00:16:45.895 6.933 - 6.987: 99.8576% ( 3) 00:16:45.895 7.093 - 7.147: 99.8625% ( 1) 00:16:45.895 7.200 - 7.253: 99.8723% ( 2) 00:16:45.895 7.307 - 7.360: 99.8772% ( 1) 00:16:45.895 7.520 - 7.573: 99.8821% ( 1) 00:16:45.895 7.573 - 7.627: 99.8870% ( 1) 00:16:45.895 7.733 - 7.787: 99.8919% ( 1) 00:16:45.895 7.840 - 7.893: 99.8969% ( 1) 00:16:45.895 8.107 - 8.160: 99.9018% ( 1) 00:16:45.895 8.800 - 8.853: 99.9067% ( 1) 00:16:45.895 3986.773 - 4014.080: 100.0000% ( 19) 00:16:45.895 00:16:45.895 Complete histogram 00:16:45.895 ================== 00:16:45.895 Range in us Cumulative Count 00:16:45.895 1.633 - 1.640: 0.1916% ( 39) 00:16:45.895 1.640 - 1.647: 0.9283% ( 150) 00:16:45.895 1.647 - 1.653: 0.9971% ( 14) 00:16:45.895 1.653 - 1.660: 1.1395% ( 29) 00:16:45.895 1.660 - 1.667: 1.2672% ( 26) 00:16:45.895 1.667 - 1.673: 1.2967% ( 6) 00:16:45.895 1.673 - 1.680: 1.3016% ( 1) 00:16:45.895 1.680 - 1.687: 3.1532% ( 377) 00:16:45.895 1.687 - 1.693: 37.6130% ( 7016) 00:16:45.895 1.693 - 1.700: 50.4126% ( 2606) 00:16:45.895 1.700 - 1.707: 59.2240% ( 1794) 00:16:45.895 1.707 - 1.720: 75.6483% ( 3344) 00:16:45.895 1.720 - 1.733: 82.1857% ( 1331) 00:16:45.895 1.733 - 1.747: 83.5560% ( 279) 00:16:45.895 1.747 - 1.760: 88.4921% ( 1005) 00:16:45.895 1.760 - 1.773: 93.7819% ( 1077) 00:16:45.895 1.773 - 1.787: 96.9941% ( 654) 00:16:45.895 1.787 - 1.800: 98.6395% ( 335) 00:16:45.895 1.800 - 1.813: 99.1847% ( 111) 00:16:45.895 1.813 - 1.827: 99.2485% ( 13) 00:16:45.895 1.827 - 1.840: 99.2534% ( 1) 00:16:45.895 1.840 - 1.853: 99.2583% ( 1) 00:16:45.895 1.853 - 1.867: 99.2633% ( 1) 00:16:45.895 1.867 - 1.880: 99.2682% ( 1) 00:16:45.895 1.920 - 1.933: 99.2731% ( 1) 00:16:45.895 1.960 - 1.973: 99.2780% ( 1) 00:16:45.895 1.973 - 1.987: 99.2829% ( 1) 00:16:45.895 2.040 - 2.053: 99.2878% ( 1) 00:16:45.895 2.053 - 2.067: 99.2976% ( 2) 00:16:45.895 2.067 - 2.080: 99.3075% ( 2) 00:16:45.895 2.080 - 2.093: 99.3222% ( 3) 00:16:45.895 2.093 - 2.107: 99.3271% ( 1) 00:16:45.895 2.107 - 2.120: 99.3369% ( 2) 00:16:45.895 2.120 - 2.133: 99.3418% ( 1) 00:16:45.895 2.133 - 2.147: 99.3566% ( 3) 00:16:45.895 2.147 - 2.160: 99.3664% ( 2) 00:16:45.895 2.160 - 2.173: 99.3713% ( 1) 00:16:45.895 2.173 - 2.187: 99.3762% ( 1) 00:16:45.895 2.200 - 2.213: 99.3811% ( 1) 00:16:45.895 2.213 - 2.227: 99.3861% ( 1) 00:16:45.895 2.227 - 2.240: 99.3910% ( 1) 00:16:45.895 2.240 - 2.253: 99.4008% ( 2) 00:16:45.895 2.267 - 2.280: 99.4204% ( 4) 00:16:45.895 2.320 - 2.333: 99.4401% ( 4) 00:16:45.895 2.333 - 2.347: 99.4499% ( 2) 00:16:45.895 2.360 - 2.373: 99.4548% ( 1) 00:16:45.895 2.400 - 2.413: 99.4597% ( 1) 00:16:45.895 2.587 - 2.600: 99.4646% ( 1) 00:16:45.895 2.813 - 2.827: 99.4695% ( 1) 00:16:45.895 4.293 - 4.320: 99.4745% ( 1) 00:16:45.895 4.560 - 4.587: 99.4843% ( 2) 00:16:45.895 4.587 - 4.613: 99.4892% ( 1) 00:16:45.895 4.827 - 4.853: 99.4941% ( 1) 00:16:45.895 4.987 - 5.013: 99.4990% ( 1) 00:16:45.895 5.067 - 5.093: 99.5039% ( 1) 00:16:45.895 5.093 - 5.120: 99.5138% ( 2) 00:16:45.895 5.147 - 5.173: 99.5187% ( 1) 00:16:45.895 5.173 - 5.200: 99.5236% ( 1) 00:16:45.895 5.253 - 5.280: 99.5285% 
( 1) 00:16:45.895 5.280 - 5.307: 99.5334% ( 1) 00:16:45.895 5.307 - 5.333: 99.5383% ( 1) 00:16:45.895 5.520 - 5.547: 99.5432% ( 1) 00:16:45.895 5.547 - 5.573: 99.5580% ( 3) 00:16:45.895 5.573 - 5.600: 99.5629% ( 1) 00:16:45.895 5.600 - 5.627: 99.5678% ( 1) 00:16:45.895 5.680 - 5.707: 99.5727% ( 1) 00:16:45.895 5.733 - 5.760: 99.5776% ( 1) 00:16:45.895 5.760 - 5.787: 99.5825% ( 1) 00:16:45.895 5.787 - 5.813: 99.5874% ( 1) 00:16:45.895 5.867 - 5.893: 99.5972% ( 2) 00:16:45.895 6.107 - 6.133: 99.6071% ( 2) 00:16:45.895 6.133 - 6.160: 99.6120% ( 1) 00:16:45.895 6.160 - 6.187: 99.6169% ( 1) 00:16:45.895 6.293 - 6.320: 99.6218% ( 1) 00:16:45.895 6.427 - 6.453: 99.6267% ( 1) 00:16:45.895 6.453 - 6.480: 99.6316% ( 1) 00:16:45.895 6.507 - 6.533: 99.6365% ( 1) 00:16:45.895 6.800 - 6.827: 99.6415% ( 1) 00:16:45.895 6.987 - 7.040: 99.6464% ( 1) 00:16:45.895 7.093 - 7.147: 99.6513% ( 1) 00:16:45.895 7.253 - 7.307: 99.6562% ( 1) 00:16:45.895 7.627 - 7.680: 99.6611% ( 1) 00:16:45.895 39.253 - 39.467: 99.6660% ( 1) 00:16:45.895 3986.773 - 4014.080: 100.0000% ( 68) 00:16:45.895 00:16:45.895 [2024-11-27 09:49:01.258401] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:45.895 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:45.895 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:45.895 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:45.895 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:45.895 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:46.156 [ 00:16:46.156 { 00:16:46.156 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:46.156 "subtype": "Discovery", 00:16:46.156 "listen_addresses": [], 00:16:46.156 "allow_any_host": true, 00:16:46.156 "hosts": [] 00:16:46.156 }, 00:16:46.156 { 00:16:46.156 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:46.156 "subtype": "NVMe", 00:16:46.156 "listen_addresses": [ 00:16:46.156 { 00:16:46.156 "trtype": "VFIOUSER", 00:16:46.156 "adrfam": "IPv4", 00:16:46.156 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:46.156 "trsvcid": "0" 00:16:46.156 } 00:16:46.156 ], 00:16:46.156 "allow_any_host": true, 00:16:46.156 "hosts": [], 00:16:46.156 "serial_number": "SPDK1", 00:16:46.156 "model_number": "SPDK bdev Controller", 00:16:46.156 "max_namespaces": 32, 00:16:46.156 "min_cntlid": 1, 00:16:46.156 "max_cntlid": 65519, 00:16:46.156 "namespaces": [ 00:16:46.156 { 00:16:46.156 "nsid": 1, 00:16:46.156 "bdev_name": "Malloc1", 00:16:46.156 "name": "Malloc1", 00:16:46.156 "nguid": "3AEFDA9974BE4EEBA4A94869A0BA62A9", 00:16:46.156 "uuid": "3aefda99-74be-4eeb-a4a9-4869a0ba62a9" 00:16:46.156 } 00:16:46.156 ] 00:16:46.156 }, 00:16:46.156 { 00:16:46.156 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:46.156 "subtype": "NVMe", 00:16:46.156 "listen_addresses": [ 00:16:46.156 { 00:16:46.156 "trtype": "VFIOUSER", 00:16:46.156 "adrfam": "IPv4", 00:16:46.156 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:46.156 "trsvcid": "0" 00:16:46.156 } 00:16:46.156 ], 00:16:46.156 "allow_any_host": true, 00:16:46.156 "hosts": [], 
"serial_number": "SPDK2", 00:16:46.156 "model_number": "SPDK bdev Controller", 00:16:46.156 "max_namespaces": 32, 00:16:46.156 "min_cntlid": 1, 00:16:46.156 "max_cntlid": 65519, 00:16:46.156 "namespaces": [ 00:16:46.156 { 00:16:46.156 "nsid": 1, 00:16:46.156 "bdev_name": "Malloc2", 00:16:46.156 "name": "Malloc2", 00:16:46.156 "nguid": "347D3354A759467895FCA1D06574E9E3", 00:16:46.156 "uuid": "347d3354-a759-4678-95fc-a1d06574e9e3" 00:16:46.156 } 00:16:46.156 ] 00:16:46.156 } 00:16:46.156 ] 00:16:46.157 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:46.157 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:46.157 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3836037 00:16:46.157 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:46.157 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:46.157 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:46.157 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:46.157 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:46.157 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:46.157 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:46.417 [2024-11-27 09:49:01.637542] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:46.417 Malloc3 00:16:46.417 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:46.417 [2024-11-27 09:49:01.831924] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:46.417 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:46.417 Asynchronous Event Request test 00:16:46.417 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:46.417 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:46.417 Registering asynchronous event callbacks... 00:16:46.417 Starting namespace attribute notice tests for all controllers... 00:16:46.417 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:46.417 aer_cb - Changed Namespace 00:16:46.417 Cleaning up... 
00:16:46.678 [ 00:16:46.678 { 00:16:46.678 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:46.678 "subtype": "Discovery", 00:16:46.678 "listen_addresses": [], 00:16:46.678 "allow_any_host": true, 00:16:46.678 "hosts": [] 00:16:46.678 }, 00:16:46.678 { 00:16:46.678 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:46.678 "subtype": "NVMe", 00:16:46.678 "listen_addresses": [ 00:16:46.678 { 00:16:46.678 "trtype": "VFIOUSER", 00:16:46.678 "adrfam": "IPv4", 00:16:46.678 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:46.678 "trsvcid": "0" 00:16:46.678 } 00:16:46.678 ], 00:16:46.678 "allow_any_host": true, 00:16:46.678 "hosts": [], 00:16:46.678 "serial_number": "SPDK1", 00:16:46.678 "model_number": "SPDK bdev Controller", 00:16:46.678 "max_namespaces": 32, 00:16:46.678 "min_cntlid": 1, 00:16:46.678 "max_cntlid": 65519, 00:16:46.678 "namespaces": [ 00:16:46.678 { 00:16:46.678 "nsid": 1, 00:16:46.678 "bdev_name": "Malloc1", 00:16:46.678 "name": "Malloc1", 00:16:46.678 "nguid": "3AEFDA9974BE4EEBA4A94869A0BA62A9", 00:16:46.678 "uuid": "3aefda99-74be-4eeb-a4a9-4869a0ba62a9" 00:16:46.678 }, 00:16:46.678 { 00:16:46.678 "nsid": 2, 00:16:46.678 "bdev_name": "Malloc3", 00:16:46.678 "name": "Malloc3", 00:16:46.678 "nguid": "04B2A5C8B6BB4B50B851A084527486D6", 00:16:46.678 "uuid": "04b2a5c8-b6bb-4b50-b851-a084527486d6" 00:16:46.678 } 00:16:46.678 ] 00:16:46.678 }, 00:16:46.678 { 00:16:46.678 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:46.678 "subtype": "NVMe", 00:16:46.678 "listen_addresses": [ 00:16:46.678 { 00:16:46.678 "trtype": "VFIOUSER", 00:16:46.678 "adrfam": "IPv4", 00:16:46.678 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:46.678 "trsvcid": "0" 00:16:46.678 } 00:16:46.678 ], 00:16:46.678 "allow_any_host": true, 00:16:46.678 "hosts": [], 00:16:46.678 "serial_number": "SPDK2", 00:16:46.678 "model_number": "SPDK bdev Controller", 00:16:46.678 "max_namespaces": 32, 00:16:46.678 "min_cntlid": 1, 00:16:46.678 "max_cntlid": 65519, 00:16:46.678 "namespaces": [ 00:16:46.678 { 00:16:46.679 "nsid": 1, 00:16:46.679 "bdev_name": "Malloc2", 00:16:46.679 "name": "Malloc2", 00:16:46.679 "nguid": "347D3354A759467895FCA1D06574E9E3", 00:16:46.679 "uuid": "347d3354-a759-4678-95fc-a1d06574e9e3" 00:16:46.679 } 00:16:46.679 ] 00:16:46.679 } 00:16:46.679 ] 00:16:46.679 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3836037 00:16:46.679 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:46.679 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:46.679 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:46.679 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:46.679 [2024-11-27 09:49:02.062989] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
00:16:46.679 [2024-11-27 09:49:02.063034] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3836050 ] 00:16:46.679 [2024-11-27 09:49:02.102401] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:46.679 [2024-11-27 09:49:02.107597] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:46.679 [2024-11-27 09:49:02.107617] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fe4727b8000 00:16:46.679 [2024-11-27 09:49:02.108596] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:46.679 [2024-11-27 09:49:02.109608] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:46.679 [2024-11-27 09:49:02.110613] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:46.679 [2024-11-27 09:49:02.111622] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:46.679 [2024-11-27 09:49:02.112633] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:46.679 [2024-11-27 09:49:02.113640] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:46.679 [2024-11-27 09:49:02.114645] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:46.679 [2024-11-27 09:49:02.115651] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:46.679 [2024-11-27 09:49:02.116655] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:46.679 [2024-11-27 09:49:02.116664] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fe4727ad000 00:16:46.679 [2024-11-27 09:49:02.117581] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:46.679 [2024-11-27 09:49:02.126957] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:46.679 [2024-11-27 09:49:02.126977] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:16:46.679 [2024-11-27 09:49:02.132058] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:46.679 [2024-11-27 09:49:02.132093] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:46.679 [2024-11-27 09:49:02.132153] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:16:46.679 
[2024-11-27 09:49:02.132167] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:16:46.679 [2024-11-27 09:49:02.132171] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:16:46.679 [2024-11-27 09:49:02.133057] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:46.679 [2024-11-27 09:49:02.133065] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:16:46.679 [2024-11-27 09:49:02.133070] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:16:46.679 [2024-11-27 09:49:02.134064] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:46.679 [2024-11-27 09:49:02.134073] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:16:46.679 [2024-11-27 09:49:02.134080] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:46.679 [2024-11-27 09:49:02.135070] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:46.679 [2024-11-27 09:49:02.135079] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:46.679 [2024-11-27 09:49:02.136080] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:46.679 [2024-11-27 09:49:02.136088] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:16:46.679 [2024-11-27 09:49:02.136091] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:46.679 [2024-11-27 09:49:02.136096] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:46.679 [2024-11-27 09:49:02.136202] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:16:46.679 [2024-11-27 09:49:02.136206] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:46.679 [2024-11-27 09:49:02.136211] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:46.679 [2024-11-27 09:49:02.137088] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:46.679 [2024-11-27 09:49:02.138094] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:46.679 [2024-11-27 09:49:02.139103] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:46.679 [2024-11-27 09:49:02.140101] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:46.679 [2024-11-27 09:49:02.140131] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:46.679 [2024-11-27 09:49:02.141108] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:46.679 [2024-11-27 09:49:02.141114] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:46.679 [2024-11-27 09:49:02.141118] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:46.679 [2024-11-27 09:49:02.141132] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:16:46.679 [2024-11-27 09:49:02.141138] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:46.679 [2024-11-27 09:49:02.141146] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:46.679 [2024-11-27 09:49:02.141150] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:46.679 [2024-11-27 09:49:02.141153] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:46.679 [2024-11-27 09:49:02.141166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:46.941 [2024-11-27 09:49:02.149165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:46.941 [2024-11-27 09:49:02.149174] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:16:46.941 [2024-11-27 09:49:02.149178] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:16:46.941 [2024-11-27 09:49:02.149181] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:16:46.941 [2024-11-27 09:49:02.149185] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:46.941 [2024-11-27 09:49:02.149190] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:16:46.941 [2024-11-27 09:49:02.149193] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:16:46.941 [2024-11-27 09:49:02.149196] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:16:46.941 [2024-11-27 09:49:02.149203] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:46.941 [2024-11-27 
09:49:02.149211] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:46.941 [2024-11-27 09:49:02.157166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:46.941 [2024-11-27 09:49:02.157175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:46.941 [2024-11-27 09:49:02.157181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:46.941 [2024-11-27 09:49:02.157187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:46.942 [2024-11-27 09:49:02.157193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:46.942 [2024-11-27 09:49:02.157197] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:46.942 [2024-11-27 09:49:02.157201] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:46.942 [2024-11-27 09:49:02.157208] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:46.942 [2024-11-27 09:49:02.165164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:46.942 [2024-11-27 09:49:02.165172] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:16:46.942 [2024-11-27 09:49:02.165176] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:46.942 [2024-11-27 09:49:02.165181] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:16:46.942 [2024-11-27 09:49:02.165185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:16:46.942 [2024-11-27 09:49:02.165191] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:46.942 [2024-11-27 09:49:02.173173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:46.942 [2024-11-27 09:49:02.173220] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:16:46.942 [2024-11-27 09:49:02.173226] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:46.942 [2024-11-27 09:49:02.173232] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:46.942 [2024-11-27 09:49:02.173236] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:16:46.942 [2024-11-27 09:49:02.173238] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:46.942 [2024-11-27 09:49:02.173243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:46.942 [2024-11-27 09:49:02.181164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:46.942 [2024-11-27 09:49:02.181173] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:16:46.942 [2024-11-27 09:49:02.181182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:16:46.942 [2024-11-27 09:49:02.181187] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:16:46.942 [2024-11-27 09:49:02.181196] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:46.942 [2024-11-27 09:49:02.181199] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:46.942 [2024-11-27 09:49:02.181201] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:46.942 [2024-11-27 09:49:02.181206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:46.942 [2024-11-27 09:49:02.189166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:46.942 [2024-11-27 09:49:02.189177] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:46.942 [2024-11-27 09:49:02.189183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:46.942 [2024-11-27 09:49:02.189188] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:46.942 [2024-11-27 09:49:02.189191] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:46.942 [2024-11-27 09:49:02.189193] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:46.942 [2024-11-27 09:49:02.189198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:46.942 [2024-11-27 09:49:02.197166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:46.942 [2024-11-27 09:49:02.197173] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:46.942 [2024-11-27 09:49:02.197179] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:46.942 [2024-11-27 09:49:02.197185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set 
supported features (timeout 30000 ms) 00:16:46.942 [2024-11-27 09:49:02.197189] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:16:46.942 [2024-11-27 09:49:02.197193] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:46.942 [2024-11-27 09:49:02.197197] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:16:46.942 [2024-11-27 09:49:02.197200] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:46.942 [2024-11-27 09:49:02.197204] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:16:46.942 [2024-11-27 09:49:02.197207] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:16:46.942 [2024-11-27 09:49:02.197219] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:46.942 [2024-11-27 09:49:02.205166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:46.942 [2024-11-27 09:49:02.205176] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:46.942 [2024-11-27 09:49:02.213166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:46.942 [2024-11-27 09:49:02.213178] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:46.942 [2024-11-27 09:49:02.221164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:46.942 [2024-11-27 09:49:02.221174] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:46.942 [2024-11-27 09:49:02.229166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:46.942 [2024-11-27 09:49:02.229179] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:46.942 [2024-11-27 09:49:02.229182] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:46.942 [2024-11-27 09:49:02.229185] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:46.942 [2024-11-27 09:49:02.229187] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:46.942 [2024-11-27 09:49:02.229189] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:46.942 [2024-11-27 09:49:02.229194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:46.942 [2024-11-27 09:49:02.229199] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:46.942 
[2024-11-27 09:49:02.229202] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:46.942 [2024-11-27 09:49:02.229205] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:46.942 [2024-11-27 09:49:02.229209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:46.942 [2024-11-27 09:49:02.229214] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:46.942 [2024-11-27 09:49:02.229217] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:46.942 [2024-11-27 09:49:02.229219] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:46.942 [2024-11-27 09:49:02.229223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:46.942 [2024-11-27 09:49:02.229229] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:46.942 [2024-11-27 09:49:02.229232] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:46.942 [2024-11-27 09:49:02.229234] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:46.942 [2024-11-27 09:49:02.229238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:46.942 [2024-11-27 09:49:02.237165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:46.942 [2024-11-27 09:49:02.237177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:46.942 [2024-11-27 09:49:02.237184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:46.942 [2024-11-27 09:49:02.237189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:46.942 ===================================================== 00:16:46.942 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:46.942 ===================================================== 00:16:46.942 Controller Capabilities/Features 00:16:46.942 ================================ 00:16:46.942 Vendor ID: 4e58 00:16:46.942 Subsystem Vendor ID: 4e58 00:16:46.942 Serial Number: SPDK2 00:16:46.942 Model Number: SPDK bdev Controller 00:16:46.942 Firmware Version: 25.01 00:16:46.942 Recommended Arb Burst: 6 00:16:46.942 IEEE OUI Identifier: 8d 6b 50 00:16:46.942 Multi-path I/O 00:16:46.942 May have multiple subsystem ports: Yes 00:16:46.942 May have multiple controllers: Yes 00:16:46.943 Associated with SR-IOV VF: No 00:16:46.943 Max Data Transfer Size: 131072 00:16:46.943 Max Number of Namespaces: 32 00:16:46.943 Max Number of I/O Queues: 127 00:16:46.943 NVMe Specification Version (VS): 1.3 00:16:46.943 NVMe Specification Version (Identify): 1.3 00:16:46.943 Maximum Queue Entries: 256 00:16:46.943 Contiguous Queues Required: Yes 00:16:46.943 Arbitration Mechanisms Supported 00:16:46.943 Weighted Round Robin: Not Supported 00:16:46.943 Vendor Specific: Not 
Supported 00:16:46.943 Reset Timeout: 15000 ms 00:16:46.943 Doorbell Stride: 4 bytes 00:16:46.943 NVM Subsystem Reset: Not Supported 00:16:46.943 Command Sets Supported 00:16:46.943 NVM Command Set: Supported 00:16:46.943 Boot Partition: Not Supported 00:16:46.943 Memory Page Size Minimum: 4096 bytes 00:16:46.943 Memory Page Size Maximum: 4096 bytes 00:16:46.943 Persistent Memory Region: Not Supported 00:16:46.943 Optional Asynchronous Events Supported 00:16:46.943 Namespace Attribute Notices: Supported 00:16:46.943 Firmware Activation Notices: Not Supported 00:16:46.943 ANA Change Notices: Not Supported 00:16:46.943 PLE Aggregate Log Change Notices: Not Supported 00:16:46.943 LBA Status Info Alert Notices: Not Supported 00:16:46.943 EGE Aggregate Log Change Notices: Not Supported 00:16:46.943 Normal NVM Subsystem Shutdown event: Not Supported 00:16:46.943 Zone Descriptor Change Notices: Not Supported 00:16:46.943 Discovery Log Change Notices: Not Supported 00:16:46.943 Controller Attributes 00:16:46.943 128-bit Host Identifier: Supported 00:16:46.943 Non-Operational Permissive Mode: Not Supported 00:16:46.943 NVM Sets: Not Supported 00:16:46.943 Read Recovery Levels: Not Supported 00:16:46.943 Endurance Groups: Not Supported 00:16:46.943 Predictable Latency Mode: Not Supported 00:16:46.943 Traffic Based Keep ALive: Not Supported 00:16:46.943 Namespace Granularity: Not Supported 00:16:46.943 SQ Associations: Not Supported 00:16:46.943 UUID List: Not Supported 00:16:46.943 Multi-Domain Subsystem: Not Supported 00:16:46.943 Fixed Capacity Management: Not Supported 00:16:46.943 Variable Capacity Management: Not Supported 00:16:46.943 Delete Endurance Group: Not Supported 00:16:46.943 Delete NVM Set: Not Supported 00:16:46.943 Extended LBA Formats Supported: Not Supported 00:16:46.943 Flexible Data Placement Supported: Not Supported 00:16:46.943 00:16:46.943 Controller Memory Buffer Support 00:16:46.943 ================================ 00:16:46.943 Supported: No 00:16:46.943 00:16:46.943 Persistent Memory Region Support 00:16:46.943 ================================ 00:16:46.943 Supported: No 00:16:46.943 00:16:46.943 Admin Command Set Attributes 00:16:46.943 ============================ 00:16:46.943 Security Send/Receive: Not Supported 00:16:46.943 Format NVM: Not Supported 00:16:46.943 Firmware Activate/Download: Not Supported 00:16:46.943 Namespace Management: Not Supported 00:16:46.943 Device Self-Test: Not Supported 00:16:46.943 Directives: Not Supported 00:16:46.943 NVMe-MI: Not Supported 00:16:46.943 Virtualization Management: Not Supported 00:16:46.943 Doorbell Buffer Config: Not Supported 00:16:46.943 Get LBA Status Capability: Not Supported 00:16:46.943 Command & Feature Lockdown Capability: Not Supported 00:16:46.943 Abort Command Limit: 4 00:16:46.943 Async Event Request Limit: 4 00:16:46.943 Number of Firmware Slots: N/A 00:16:46.943 Firmware Slot 1 Read-Only: N/A 00:16:46.943 Firmware Activation Without Reset: N/A 00:16:46.943 Multiple Update Detection Support: N/A 00:16:46.943 Firmware Update Granularity: No Information Provided 00:16:46.943 Per-Namespace SMART Log: No 00:16:46.943 Asymmetric Namespace Access Log Page: Not Supported 00:16:46.943 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:46.943 Command Effects Log Page: Supported 00:16:46.943 Get Log Page Extended Data: Supported 00:16:46.943 Telemetry Log Pages: Not Supported 00:16:46.943 Persistent Event Log Pages: Not Supported 00:16:46.943 Supported Log Pages Log Page: May Support 00:16:46.943 Commands Supported & 
Effects Log Page: Not Supported 00:16:46.943 Feature Identifiers & Effects Log Page:May Support 00:16:46.943 NVMe-MI Commands & Effects Log Page: May Support 00:16:46.943 Data Area 4 for Telemetry Log: Not Supported 00:16:46.943 Error Log Page Entries Supported: 128 00:16:46.943 Keep Alive: Supported 00:16:46.943 Keep Alive Granularity: 10000 ms 00:16:46.943 00:16:46.943 NVM Command Set Attributes 00:16:46.943 ========================== 00:16:46.943 Submission Queue Entry Size 00:16:46.943 Max: 64 00:16:46.943 Min: 64 00:16:46.943 Completion Queue Entry Size 00:16:46.943 Max: 16 00:16:46.943 Min: 16 00:16:46.943 Number of Namespaces: 32 00:16:46.943 Compare Command: Supported 00:16:46.943 Write Uncorrectable Command: Not Supported 00:16:46.943 Dataset Management Command: Supported 00:16:46.943 Write Zeroes Command: Supported 00:16:46.943 Set Features Save Field: Not Supported 00:16:46.943 Reservations: Not Supported 00:16:46.943 Timestamp: Not Supported 00:16:46.943 Copy: Supported 00:16:46.943 Volatile Write Cache: Present 00:16:46.943 Atomic Write Unit (Normal): 1 00:16:46.943 Atomic Write Unit (PFail): 1 00:16:46.943 Atomic Compare & Write Unit: 1 00:16:46.943 Fused Compare & Write: Supported 00:16:46.943 Scatter-Gather List 00:16:46.943 SGL Command Set: Supported (Dword aligned) 00:16:46.943 SGL Keyed: Not Supported 00:16:46.943 SGL Bit Bucket Descriptor: Not Supported 00:16:46.943 SGL Metadata Pointer: Not Supported 00:16:46.943 Oversized SGL: Not Supported 00:16:46.943 SGL Metadata Address: Not Supported 00:16:46.943 SGL Offset: Not Supported 00:16:46.943 Transport SGL Data Block: Not Supported 00:16:46.943 Replay Protected Memory Block: Not Supported 00:16:46.943 00:16:46.943 Firmware Slot Information 00:16:46.943 ========================= 00:16:46.943 Active slot: 1 00:16:46.943 Slot 1 Firmware Revision: 25.01 00:16:46.943 00:16:46.943 00:16:46.943 Commands Supported and Effects 00:16:46.943 ============================== 00:16:46.943 Admin Commands 00:16:46.943 -------------- 00:16:46.943 Get Log Page (02h): Supported 00:16:46.943 Identify (06h): Supported 00:16:46.943 Abort (08h): Supported 00:16:46.943 Set Features (09h): Supported 00:16:46.943 Get Features (0Ah): Supported 00:16:46.943 Asynchronous Event Request (0Ch): Supported 00:16:46.943 Keep Alive (18h): Supported 00:16:46.943 I/O Commands 00:16:46.943 ------------ 00:16:46.943 Flush (00h): Supported LBA-Change 00:16:46.944 Write (01h): Supported LBA-Change 00:16:46.944 Read (02h): Supported 00:16:46.944 Compare (05h): Supported 00:16:46.944 Write Zeroes (08h): Supported LBA-Change 00:16:46.944 Dataset Management (09h): Supported LBA-Change 00:16:46.944 Copy (19h): Supported LBA-Change 00:16:46.944 00:16:46.944 Error Log 00:16:46.944 ========= 00:16:46.944 00:16:46.944 Arbitration 00:16:46.944 =========== 00:16:46.944 Arbitration Burst: 1 00:16:46.944 00:16:46.944 Power Management 00:16:46.944 ================ 00:16:46.944 Number of Power States: 1 00:16:46.944 Current Power State: Power State #0 00:16:46.944 Power State #0: 00:16:46.944 Max Power: 0.00 W 00:16:46.944 Non-Operational State: Operational 00:16:46.944 Entry Latency: Not Reported 00:16:46.944 Exit Latency: Not Reported 00:16:46.944 Relative Read Throughput: 0 00:16:46.944 Relative Read Latency: 0 00:16:46.944 Relative Write Throughput: 0 00:16:46.944 Relative Write Latency: 0 00:16:46.944 Idle Power: Not Reported 00:16:46.944 Active Power: Not Reported 00:16:46.944 Non-Operational Permissive Mode: Not Supported 00:16:46.944 00:16:46.944 Health Information 
00:16:46.944 ================== 00:16:46.944 Critical Warnings: 00:16:46.944 Available Spare Space: OK 00:16:46.944 Temperature: OK 00:16:46.944 Device Reliability: OK 00:16:46.944 Read Only: No 00:16:46.944 Volatile Memory Backup: OK 00:16:46.944 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:46.944 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:46.944 Available Spare: 0% 00:16:46.944 [2024-11-27 09:49:02.237263] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:46.944 [2024-11-27 09:49:02.245164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:46.944 [2024-11-27 09:49:02.245189] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:16:46.944 [2024-11-27 09:49:02.245198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:46.944 [2024-11-27 09:49:02.245202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:46.944 [2024-11-27 09:49:02.245207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:46.944 [2024-11-27 09:49:02.245211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:46.944 [2024-11-27 09:49:02.245237] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:46.944 [2024-11-27 09:49:02.245245] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:46.944 [2024-11-27 09:49:02.246244] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:46.944 [2024-11-27 09:49:02.246280] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:16:46.944 [2024-11-27 09:49:02.246285] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:16:46.944 [2024-11-27 09:49:02.247246] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:46.944 [2024-11-27 09:49:02.247255] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:16:46.944 [2024-11-27 09:49:02.247296] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:46.944 [2024-11-27 09:49:02.248265] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:46.944 Available Spare Threshold: 0% 00:16:46.944 Life Percentage Used: 0% 00:16:46.944 Data Units Read: 0 00:16:46.944 Data Units Written: 0 00:16:46.944 Host Read Commands: 0 00:16:46.944 Host Write Commands: 0 00:16:46.944 Controller Busy Time: 0 minutes 00:16:46.944 Power Cycles: 0 00:16:46.944 Power On Hours: 0 hours 00:16:46.944 Unsafe Shutdowns: 0 00:16:46.944 Unrecoverable Media Errors: 0 00:16:46.944 Lifetime Error Log Entries: 0 00:16:46.944 Warning Temperature
Time: 0 minutes 00:16:46.944 Critical Temperature Time: 0 minutes 00:16:46.944 00:16:46.944 Number of Queues 00:16:46.944 ================ 00:16:46.944 Number of I/O Submission Queues: 127 00:16:46.944 Number of I/O Completion Queues: 127 00:16:46.944 00:16:46.944 Active Namespaces 00:16:46.944 ================= 00:16:46.944 Namespace ID:1 00:16:46.944 Error Recovery Timeout: Unlimited 00:16:46.944 Command Set Identifier: NVM (00h) 00:16:46.944 Deallocate: Supported 00:16:46.944 Deallocated/Unwritten Error: Not Supported 00:16:46.944 Deallocated Read Value: Unknown 00:16:46.944 Deallocate in Write Zeroes: Not Supported 00:16:46.944 Deallocated Guard Field: 0xFFFF 00:16:46.944 Flush: Supported 00:16:46.944 Reservation: Supported 00:16:46.944 Namespace Sharing Capabilities: Multiple Controllers 00:16:46.944 Size (in LBAs): 131072 (0GiB) 00:16:46.944 Capacity (in LBAs): 131072 (0GiB) 00:16:46.944 Utilization (in LBAs): 131072 (0GiB) 00:16:46.944 NGUID: 347D3354A759467895FCA1D06574E9E3 00:16:46.944 UUID: 347d3354-a759-4678-95fc-a1d06574e9e3 00:16:46.944 Thin Provisioning: Not Supported 00:16:46.944 Per-NS Atomic Units: Yes 00:16:46.944 Atomic Boundary Size (Normal): 0 00:16:46.944 Atomic Boundary Size (PFail): 0 00:16:46.944 Atomic Boundary Offset: 0 00:16:46.944 Maximum Single Source Range Length: 65535 00:16:46.944 Maximum Copy Length: 65535 00:16:46.944 Maximum Source Range Count: 1 00:16:46.944 NGUID/EUI64 Never Reused: No 00:16:46.944 Namespace Write Protected: No 00:16:46.944 Number of LBA Formats: 1 00:16:46.944 Current LBA Format: LBA Format #00 00:16:46.944 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:46.944 00:16:46.944 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:47.204 [2024-11-27 09:49:02.435223] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:52.489 Initializing NVMe Controllers 00:16:52.490 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:52.490 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:52.490 Initialization complete. Launching workers. 
00:16:52.490 ======================================================== 00:16:52.490 Latency(us) 00:16:52.490 Device Information : IOPS MiB/s Average min max 00:16:52.490 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40018.40 156.32 3198.59 842.94 8139.27 00:16:52.490 ======================================================== 00:16:52.490 Total : 40018.40 156.32 3198.59 842.94 8139.27 00:16:52.490 00:16:52.490 [2024-11-27 09:49:07.538358] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:52.490 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:52.490 [2024-11-27 09:49:07.729986] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:57.773 Initializing NVMe Controllers 00:16:57.773 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:57.773 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:57.773 Initialization complete. Launching workers. 00:16:57.773 ======================================================== 00:16:57.773 Latency(us) 00:16:57.773 Device Information : IOPS MiB/s Average min max 00:16:57.773 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39998.38 156.24 3200.00 849.33 9768.74 00:16:57.773 ======================================================== 00:16:57.773 Total : 39998.38 156.24 3200.00 849.33 9768.74 00:16:57.773 00:16:57.773 [2024-11-27 09:49:12.747862] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:57.773 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:57.773 [2024-11-27 09:49:12.950094] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:03.055 [2024-11-27 09:49:18.097239] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:03.055 Initializing NVMe Controllers 00:17:03.055 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:03.055 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:03.055 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:17:03.055 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:17:03.055 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:17:03.055 Initialization complete. Launching workers. 
00:17:03.055 Starting thread on core 2 00:17:03.055 Starting thread on core 3 00:17:03.055 Starting thread on core 1 00:17:03.055 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:17:03.055 [2024-11-27 09:49:18.348194] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:06.352 [2024-11-27 09:49:21.401609] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:06.352 Initializing NVMe Controllers 00:17:06.352 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:06.352 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:06.352 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:17:06.352 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:17:06.352 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:17:06.352 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:17:06.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:06.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:06.352 Initialization complete. Launching workers. 00:17:06.352 Starting thread on core 1 with urgent priority queue 00:17:06.352 Starting thread on core 2 with urgent priority queue 00:17:06.352 Starting thread on core 3 with urgent priority queue 00:17:06.352 Starting thread on core 0 with urgent priority queue 00:17:06.352 SPDK bdev Controller (SPDK2 ) core 0: 11871.33 IO/s 8.42 secs/100000 ios 00:17:06.352 SPDK bdev Controller (SPDK2 ) core 1: 13480.67 IO/s 7.42 secs/100000 ios 00:17:06.352 SPDK bdev Controller (SPDK2 ) core 2: 12492.67 IO/s 8.00 secs/100000 ios 00:17:06.352 SPDK bdev Controller (SPDK2 ) core 3: 10576.67 IO/s 9.45 secs/100000 ios 00:17:06.352 ======================================================== 00:17:06.352 00:17:06.352 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:06.352 [2024-11-27 09:49:21.637504] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:06.352 Initializing NVMe Controllers 00:17:06.352 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:06.352 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:06.352 Namespace ID: 1 size: 0GB 00:17:06.352 Initialization complete. 00:17:06.352 INFO: using host memory buffer for IO 00:17:06.353 Hello world! 
00:17:06.353 [2024-11-27 09:49:21.647573] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:06.353 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:06.613 [2024-11-27 09:49:21.886890] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:07.553 Initializing NVMe Controllers 00:17:07.553 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:07.553 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:07.553 Initialization complete. Launching workers. 00:17:07.553 submit (in ns) avg, min, max = 5718.8, 2820.8, 3999742.5 00:17:07.553 complete (in ns) avg, min, max = 16414.0, 1628.3, 3998069.2 00:17:07.553 00:17:07.553 Submit histogram 00:17:07.553 ================ 00:17:07.553 Range in us Cumulative Count 00:17:07.553 2.813 - 2.827: 0.2259% ( 46) 00:17:07.553 2.827 - 2.840: 1.1641% ( 191) 00:17:07.553 2.840 - 2.853: 3.5218% ( 480) 00:17:07.553 2.853 - 2.867: 8.4287% ( 999) 00:17:07.553 2.867 - 2.880: 12.9427% ( 919) 00:17:07.553 2.880 - 2.893: 18.2917% ( 1089) 00:17:07.553 2.893 - 2.907: 23.2575% ( 1011) 00:17:07.553 2.907 - 2.920: 28.2872% ( 1024) 00:17:07.553 2.920 - 2.933: 34.0783% ( 1179) 00:17:07.553 2.933 - 2.947: 39.0835% ( 1019) 00:17:07.553 2.947 - 2.960: 44.7124% ( 1146) 00:17:07.553 2.960 - 2.973: 50.6508% ( 1209) 00:17:07.553 2.973 - 2.987: 58.4557% ( 1589) 00:17:07.553 2.987 - 3.000: 67.3805% ( 1817) 00:17:07.553 3.000 - 3.013: 76.9930% ( 1957) 00:17:07.553 3.013 - 3.027: 84.1495% ( 1457) 00:17:07.553 3.027 - 3.040: 89.8914% ( 1169) 00:17:07.553 3.040 - 3.053: 94.0027% ( 837) 00:17:07.553 3.053 - 3.067: 96.8908% ( 588) 00:17:07.553 3.067 - 3.080: 98.3447% ( 296) 00:17:07.553 3.080 - 3.093: 99.0913% ( 152) 00:17:07.553 3.093 - 3.107: 99.3909% ( 61) 00:17:07.553 3.107 - 3.120: 99.4990% ( 22) 00:17:07.553 3.120 - 3.133: 99.5285% ( 6) 00:17:07.553 3.133 - 3.147: 99.5383% ( 2) 00:17:07.553 3.147 - 3.160: 99.5481% ( 2) 00:17:07.553 3.213 - 3.227: 99.5530% ( 1) 00:17:07.553 3.307 - 3.320: 99.5579% ( 1) 00:17:07.553 3.347 - 3.360: 99.5628% ( 1) 00:17:07.553 3.400 - 3.413: 99.5678% ( 1) 00:17:07.553 3.467 - 3.493: 99.5727% ( 1) 00:17:07.553 3.707 - 3.733: 99.5776% ( 1) 00:17:07.553 4.027 - 4.053: 99.5825% ( 1) 00:17:07.553 4.053 - 4.080: 99.5874% ( 1) 00:17:07.554 4.133 - 4.160: 99.5972% ( 2) 00:17:07.554 4.373 - 4.400: 99.6021% ( 1) 00:17:07.554 4.400 - 4.427: 99.6071% ( 1) 00:17:07.554 4.480 - 4.507: 99.6120% ( 1) 00:17:07.554 4.613 - 4.640: 99.6218% ( 2) 00:17:07.554 4.640 - 4.667: 99.6267% ( 1) 00:17:07.554 4.773 - 4.800: 99.6463% ( 4) 00:17:07.554 4.827 - 4.853: 99.6513% ( 1) 00:17:07.554 4.853 - 4.880: 99.6611% ( 2) 00:17:07.554 4.880 - 4.907: 99.6709% ( 2) 00:17:07.554 4.907 - 4.933: 99.6807% ( 2) 00:17:07.554 4.933 - 4.960: 99.6856% ( 1) 00:17:07.554 4.960 - 4.987: 99.7053% ( 4) 00:17:07.554 4.987 - 5.013: 99.7151% ( 2) 00:17:07.554 5.067 - 5.093: 99.7249% ( 2) 00:17:07.554 5.093 - 5.120: 99.7446% ( 4) 00:17:07.554 5.120 - 5.147: 99.7544% ( 2) 00:17:07.554 5.147 - 5.173: 99.7593% ( 1) 00:17:07.554 5.253 - 5.280: 99.7642% ( 1) 00:17:07.554 5.520 - 5.547: 99.7691% ( 1) 00:17:07.554 5.547 - 5.573: 99.7741% ( 1) 00:17:07.554 5.627 - 5.653: 99.7888% ( 3) 00:17:07.554 5.707 - 5.733: 99.7937% ( 1) 00:17:07.554 5.760 - 5.787: 
99.7986% ( 1) 00:17:07.554 5.867 - 5.893: 99.8035% ( 1) 00:17:07.554 5.893 - 5.920: 99.8084% ( 1) 00:17:07.554 6.080 - 6.107: 99.8183% ( 2) 00:17:07.554 6.107 - 6.133: 99.8232% ( 1) 00:17:07.554 6.133 - 6.160: 99.8379% ( 3) 00:17:07.554 6.320 - 6.347: 99.8428% ( 1) 00:17:07.554 6.373 - 6.400: 99.8477% ( 1) 00:17:07.554 6.533 - 6.560: 99.8526% ( 1) 00:17:07.554 6.587 - 6.613: 99.8576% ( 1) 00:17:07.554 6.613 - 6.640: 99.8625% ( 1) 00:17:07.554 6.667 - 6.693: 99.8674% ( 1) 00:17:07.554 6.880 - 6.933: 99.8723% ( 1) 00:17:07.554 6.933 - 6.987: 99.8772% ( 1) 00:17:07.554 7.040 - 7.093: 99.8821% ( 1) 00:17:07.554 7.093 - 7.147: 99.8870% ( 1) 00:17:07.554 7.520 - 7.573: 99.8919% ( 1) 00:17:07.554 7.680 - 7.733: 99.8969% ( 1) 00:17:07.554 8.533 - 8.587: 99.9018% ( 1) 00:17:07.554 8.587 - 8.640: 99.9067% ( 1) 00:17:07.554 8.693 - 8.747: 99.9116% ( 1) 00:17:07.554 9.067 - 9.120: 99.9165% ( 1) 00:17:07.554 [2024-11-27 09:49:22.980689] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:07.554 9.493 - 9.547: 99.9214% ( 1) 00:17:07.554 12.800 - 12.853: 99.9263% ( 1) 00:17:07.554 13.013 - 13.067: 99.9312% ( 1) 00:17:07.554 3986.773 - 4014.080: 100.0000% ( 14) 00:17:07.554 00:17:07.554 Complete histogram 00:17:07.554 ================== 00:17:07.554 Range in us Cumulative Count 00:17:07.554 1.627 - 1.633: 0.0098% ( 2) 00:17:07.554 1.633 - 1.640: 0.7613% ( 153) 00:17:07.554 1.640 - 1.647: 1.1199% ( 73) 00:17:07.554 1.647 - 1.653: 1.1641% ( 9) 00:17:07.554 1.653 - 1.660: 1.3655% ( 41) 00:17:07.554 1.660 - 1.667: 1.4490% ( 17) 00:17:07.554 1.667 - 1.673: 1.8714% ( 86) 00:17:07.554 1.673 - 1.680: 46.3824% ( 9062) 00:17:07.554 1.680 - 1.687: 53.3523% ( 1419) 00:17:07.554 1.687 - 1.693: 59.2809% ( 1207) 00:17:07.554 1.693 - 1.700: 72.0713% ( 2604) 00:17:07.554 1.700 - 1.707: 76.1531% ( 831) 00:17:07.554 1.707 - 1.720: 82.7300% ( 1339) 00:17:07.554 1.720 - 1.733: 84.3116% ( 322) 00:17:07.554 1.733 - 1.747: 88.4130% ( 835) 00:17:07.554 1.747 - 1.760: 93.9241% ( 1122) 00:17:07.554 1.760 - 1.773: 97.2101% ( 669) 00:17:07.554 1.773 - 1.787: 98.8555% ( 335) 00:17:07.554 1.787 - 1.800: 99.3467% ( 100) 00:17:07.554 1.800 - 1.813: 99.4597% ( 23) 00:17:07.554 1.813 - 1.827: 99.4744% ( 3) 00:17:07.554 1.960 - 1.973: 99.4793% ( 1) 00:17:07.554 3.267 - 3.280: 99.4843% ( 1) 00:17:07.554 3.280 - 3.293: 99.4892% ( 1) 00:17:07.554 3.333 - 3.347: 99.4941% ( 1) 00:17:07.554 3.627 - 3.653: 99.5088% ( 3) 00:17:07.554 3.707 - 3.733: 99.5137% ( 1) 00:17:07.554 3.787 - 3.813: 99.5236% ( 2) 00:17:07.554 3.813 - 3.840: 99.5285% ( 1) 00:17:07.554 3.840 - 3.867: 99.5334% ( 1) 00:17:07.554 3.973 - 4.000: 99.5383% ( 1) 00:17:07.554 4.000 - 4.027: 99.5432% ( 1) 00:17:07.554 4.213 - 4.240: 99.5481% ( 1) 00:17:07.554 4.400 - 4.427: 99.5530% ( 1) 00:17:07.554 4.507 - 4.533: 99.5579% ( 1) 00:17:07.554 4.560 - 4.587: 99.5628% ( 1) 00:17:07.554 4.853 - 4.880: 99.5727% ( 2) 00:17:07.554 5.093 - 5.120: 99.5776% ( 1) 00:17:07.554 5.280 - 5.307: 99.5874% ( 2) 00:17:07.554 5.600 - 5.627: 99.5923% ( 1) 00:17:07.554 5.813 - 5.840: 99.5972% ( 1) 00:17:07.554 5.973 - 6.000: 99.6021% ( 1) 00:17:07.554 6.053 - 6.080: 99.6071% ( 1) 00:17:07.554 6.133 - 6.160: 99.6120% ( 1) 00:17:07.554 6.240 - 6.267: 99.6169% ( 1) 00:17:07.554 6.480 - 6.507: 99.6218% ( 1) 00:17:07.554 9.173 - 9.227: 99.6267% ( 1) 00:17:07.554 9.227 - 9.280: 99.6316% ( 1) 00:17:07.554 3986.773 - 4014.080: 100.0000% ( 75) 00:17:07.554 00:17:07.554 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # 
aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:17:07.554 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:07.554 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:17:07.554 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:17:07.554 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:07.815 [ 00:17:07.815 { 00:17:07.815 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:07.815 "subtype": "Discovery", 00:17:07.815 "listen_addresses": [], 00:17:07.815 "allow_any_host": true, 00:17:07.815 "hosts": [] 00:17:07.815 }, 00:17:07.815 { 00:17:07.815 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:07.815 "subtype": "NVMe", 00:17:07.815 "listen_addresses": [ 00:17:07.815 { 00:17:07.815 "trtype": "VFIOUSER", 00:17:07.815 "adrfam": "IPv4", 00:17:07.815 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:07.815 "trsvcid": "0" 00:17:07.815 } 00:17:07.815 ], 00:17:07.815 "allow_any_host": true, 00:17:07.815 "hosts": [], 00:17:07.815 "serial_number": "SPDK1", 00:17:07.815 "model_number": "SPDK bdev Controller", 00:17:07.815 "max_namespaces": 32, 00:17:07.815 "min_cntlid": 1, 00:17:07.815 "max_cntlid": 65519, 00:17:07.815 "namespaces": [ 00:17:07.815 { 00:17:07.815 "nsid": 1, 00:17:07.815 "bdev_name": "Malloc1", 00:17:07.815 "name": "Malloc1", 00:17:07.815 "nguid": "3AEFDA9974BE4EEBA4A94869A0BA62A9", 00:17:07.815 "uuid": "3aefda99-74be-4eeb-a4a9-4869a0ba62a9" 00:17:07.815 }, 00:17:07.815 { 00:17:07.815 "nsid": 2, 00:17:07.815 "bdev_name": "Malloc3", 00:17:07.815 "name": "Malloc3", 00:17:07.815 "nguid": "04B2A5C8B6BB4B50B851A084527486D6", 00:17:07.815 "uuid": "04b2a5c8-b6bb-4b50-b851-a084527486d6" 00:17:07.815 } 00:17:07.815 ] 00:17:07.815 }, 00:17:07.815 { 00:17:07.815 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:07.815 "subtype": "NVMe", 00:17:07.815 "listen_addresses": [ 00:17:07.815 { 00:17:07.815 "trtype": "VFIOUSER", 00:17:07.815 "adrfam": "IPv4", 00:17:07.815 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:07.815 "trsvcid": "0" 00:17:07.815 } 00:17:07.815 ], 00:17:07.815 "allow_any_host": true, 00:17:07.815 "hosts": [], 00:17:07.815 "serial_number": "SPDK2", 00:17:07.815 "model_number": "SPDK bdev Controller", 00:17:07.815 "max_namespaces": 32, 00:17:07.815 "min_cntlid": 1, 00:17:07.815 "max_cntlid": 65519, 00:17:07.815 "namespaces": [ 00:17:07.815 { 00:17:07.815 "nsid": 1, 00:17:07.815 "bdev_name": "Malloc2", 00:17:07.815 "name": "Malloc2", 00:17:07.815 "nguid": "347D3354A759467895FCA1D06574E9E3", 00:17:07.815 "uuid": "347d3354-a759-4678-95fc-a1d06574e9e3" 00:17:07.815 } 00:17:07.815 ] 00:17:07.815 } 00:17:07.815 ] 00:17:07.815 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:07.815 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3840270 00:17:07.815 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:17:07.815 09:49:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:07.815 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:17:07.815 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:07.815 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:07.815 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:17:07.815 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:07.815 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:17:08.076 [2024-11-27 09:49:23.356704] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:08.076 Malloc4 00:17:08.076 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:17:08.336 [2024-11-27 09:49:23.545009] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:08.336 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:08.336 Asynchronous Event Request test 00:17:08.336 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:08.336 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:08.336 Registering asynchronous event callbacks... 00:17:08.336 Starting namespace attribute notice tests for all controllers... 00:17:08.336 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:08.336 aer_cb - Changed Namespace 00:17:08.336 Cleaning up... 
00:17:08.336 [ 00:17:08.336 { 00:17:08.336 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:08.336 "subtype": "Discovery", 00:17:08.336 "listen_addresses": [], 00:17:08.336 "allow_any_host": true, 00:17:08.336 "hosts": [] 00:17:08.336 }, 00:17:08.336 { 00:17:08.336 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:08.336 "subtype": "NVMe", 00:17:08.336 "listen_addresses": [ 00:17:08.336 { 00:17:08.336 "trtype": "VFIOUSER", 00:17:08.336 "adrfam": "IPv4", 00:17:08.336 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:08.336 "trsvcid": "0" 00:17:08.336 } 00:17:08.336 ], 00:17:08.336 "allow_any_host": true, 00:17:08.336 "hosts": [], 00:17:08.336 "serial_number": "SPDK1", 00:17:08.336 "model_number": "SPDK bdev Controller", 00:17:08.336 "max_namespaces": 32, 00:17:08.336 "min_cntlid": 1, 00:17:08.336 "max_cntlid": 65519, 00:17:08.336 "namespaces": [ 00:17:08.336 { 00:17:08.336 "nsid": 1, 00:17:08.336 "bdev_name": "Malloc1", 00:17:08.336 "name": "Malloc1", 00:17:08.336 "nguid": "3AEFDA9974BE4EEBA4A94869A0BA62A9", 00:17:08.336 "uuid": "3aefda99-74be-4eeb-a4a9-4869a0ba62a9" 00:17:08.336 }, 00:17:08.336 { 00:17:08.336 "nsid": 2, 00:17:08.336 "bdev_name": "Malloc3", 00:17:08.336 "name": "Malloc3", 00:17:08.336 "nguid": "04B2A5C8B6BB4B50B851A084527486D6", 00:17:08.336 "uuid": "04b2a5c8-b6bb-4b50-b851-a084527486d6" 00:17:08.336 } 00:17:08.336 ] 00:17:08.336 }, 00:17:08.336 { 00:17:08.336 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:08.336 "subtype": "NVMe", 00:17:08.336 "listen_addresses": [ 00:17:08.336 { 00:17:08.336 "trtype": "VFIOUSER", 00:17:08.336 "adrfam": "IPv4", 00:17:08.336 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:08.336 "trsvcid": "0" 00:17:08.336 } 00:17:08.336 ], 00:17:08.336 "allow_any_host": true, 00:17:08.336 "hosts": [], 00:17:08.336 "serial_number": "SPDK2", 00:17:08.336 "model_number": "SPDK bdev Controller", 00:17:08.336 "max_namespaces": 32, 00:17:08.336 "min_cntlid": 1, 00:17:08.336 "max_cntlid": 65519, 00:17:08.336 "namespaces": [ 00:17:08.336 { 00:17:08.336 "nsid": 1, 00:17:08.336 "bdev_name": "Malloc2", 00:17:08.336 "name": "Malloc2", 00:17:08.336 "nguid": "347D3354A759467895FCA1D06574E9E3", 00:17:08.336 "uuid": "347d3354-a759-4678-95fc-a1d06574e9e3" 00:17:08.336 }, 00:17:08.336 { 00:17:08.336 "nsid": 2, 00:17:08.336 "bdev_name": "Malloc4", 00:17:08.336 "name": "Malloc4", 00:17:08.336 "nguid": "50908FB08CFA4DD683B1D1A6555B7BF9", 00:17:08.336 "uuid": "50908fb0-8cfa-4dd6-83b1-d1a6555b7bf9" 00:17:08.336 } 00:17:08.336 ] 00:17:08.336 } 00:17:08.336 ] 00:17:08.336 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3840270 00:17:08.336 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:17:08.336 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3831313 00:17:08.336 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3831313 ']' 00:17:08.336 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3831313 00:17:08.336 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:17:08.336 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:08.336 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3831313 00:17:08.597 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:08.597 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:08.597 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3831313' 00:17:08.597 killing process with pid 3831313 00:17:08.597 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3831313 00:17:08.597 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3831313 00:17:08.597 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:08.597 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:08.597 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:17:08.597 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:17:08.597 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:17:08.597 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3840419 00:17:08.597 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3840419' 00:17:08.597 Process pid: 3840419 00:17:08.597 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:08.597 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:17:08.597 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3840419 00:17:08.597 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3840419 ']' 00:17:08.597 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.597 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:08.597 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.597 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:08.597 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:08.597 [2024-11-27 09:49:24.023471] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:17:08.597 [2024-11-27 09:49:24.024397] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
00:17:08.597 [2024-11-27 09:49:24.024446] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.882 [2024-11-27 09:49:24.107141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:08.882 [2024-11-27 09:49:24.137580] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:08.882 [2024-11-27 09:49:24.137614] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:08.882 [2024-11-27 09:49:24.137619] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:08.882 [2024-11-27 09:49:24.137624] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:08.882 [2024-11-27 09:49:24.137629] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:08.882 [2024-11-27 09:49:24.138906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.882 [2024-11-27 09:49:24.139058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:08.882 [2024-11-27 09:49:24.139208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.882 [2024-11-27 09:49:24.139209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:08.882 [2024-11-27 09:49:24.190113] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:17:08.882 [2024-11-27 09:49:24.191173] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:17:08.882 [2024-11-27 09:49:24.192132] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:17:08.882 [2024-11-27 09:49:24.192684] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:17:08.882 [2024-11-27 09:49:24.192708] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:17:09.541 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:09.541 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:17:09.541 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:10.508 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:17:10.768 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:10.768 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:10.768 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:10.768 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:10.768 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:10.768 Malloc1 00:17:10.768 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:11.028 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:11.288 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:11.547 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:11.547 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:11.547 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:11.547 Malloc2 00:17:11.547 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:11.807 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:12.068 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:12.328 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:17:12.328 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3840419 00:17:12.328 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 3840419 ']' 00:17:12.328 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3840419 00:17:12.328 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:17:12.328 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:12.328 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3840419 00:17:12.328 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:12.328 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:12.328 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3840419' 00:17:12.328 killing process with pid 3840419 00:17:12.328 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3840419 00:17:12.328 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3840419 00:17:12.328 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:12.328 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:12.328 00:17:12.328 real 0m50.953s 00:17:12.328 user 3m15.286s 00:17:12.328 sys 0m2.649s 00:17:12.328 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:12.328 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:12.328 ************************************ 00:17:12.328 END TEST nvmf_vfio_user 00:17:12.328 ************************************ 00:17:12.328 09:49:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:12.328 09:49:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:12.328 09:49:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:12.328 09:49:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:12.589 ************************************ 00:17:12.589 START TEST nvmf_vfio_user_nvme_compliance 00:17:12.589 ************************************ 00:17:12.589 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:12.589 * Looking for test storage... 
00:17:12.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:17:12.589 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:12.589 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:17:12.589 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:12.589 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:12.589 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:12.589 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:12.589 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:12.589 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:17:12.589 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:17:12.589 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:17:12.589 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:17:12.590 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:17:12.590 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:17:12.590 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:17:12.590 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:12.590 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:17:12.590 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:17:12.590 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:12.590 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:12.590 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:17:12.590 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:17:12.590 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:12.590 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:17:12.590 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:17:12.590 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:12.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.590 --rc genhtml_branch_coverage=1 00:17:12.590 --rc genhtml_function_coverage=1 00:17:12.590 --rc genhtml_legend=1 00:17:12.590 --rc geninfo_all_blocks=1 00:17:12.590 --rc geninfo_unexecuted_blocks=1 00:17:12.590 00:17:12.590 ' 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:12.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.590 --rc genhtml_branch_coverage=1 00:17:12.590 --rc genhtml_function_coverage=1 00:17:12.590 --rc genhtml_legend=1 00:17:12.590 --rc geninfo_all_blocks=1 00:17:12.590 --rc geninfo_unexecuted_blocks=1 00:17:12.590 00:17:12.590 ' 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:12.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.590 --rc genhtml_branch_coverage=1 00:17:12.590 --rc genhtml_function_coverage=1 00:17:12.590 --rc genhtml_legend=1 00:17:12.590 --rc geninfo_all_blocks=1 00:17:12.590 --rc geninfo_unexecuted_blocks=1 00:17:12.590 00:17:12.590 ' 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:12.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.590 --rc genhtml_branch_coverage=1 00:17:12.590 --rc genhtml_function_coverage=1 00:17:12.590 --rc genhtml_legend=1 00:17:12.590 --rc geninfo_all_blocks=1 00:17:12.590 --rc 
geninfo_unexecuted_blocks=1 00:17:12.590 00:17:12.590 ' 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:12.590 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:17:12.590 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3841206 00:17:12.591 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3841206' 00:17:12.591 Process pid: 3841206 00:17:12.591 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:12.591 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:12.591 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3841206 00:17:12.591 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 3841206 ']' 00:17:12.591 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.591 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:12.591 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.591 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:12.591 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:12.852 [2024-11-27 09:49:28.103710] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
00:17:12.852 [2024-11-27 09:49:28.103786] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.852 [2024-11-27 09:49:28.190117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:12.852 [2024-11-27 09:49:28.225079] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:12.852 [2024-11-27 09:49:28.225113] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:12.852 [2024-11-27 09:49:28.225119] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:12.852 [2024-11-27 09:49:28.225124] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:12.852 [2024-11-27 09:49:28.225128] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:12.852 [2024-11-27 09:49:28.226287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.852 [2024-11-27 09:49:28.226564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.852 [2024-11-27 09:49:28.226565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:13.792 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:13.792 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:17:13.792 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:17:14.730 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:14.730 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:14.730 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:14.730 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.730 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:14.730 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.730 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:14.730 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:14.730 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.730 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:14.730 malloc0 00:17:14.730 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.730 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:17:14.730 09:49:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.730 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:14.730 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.730 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:14.730 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.730 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:14.730 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.730 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:14.730 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.730 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:14.730 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.730 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:14.730 00:17:14.730 00:17:14.730 CUnit - A unit testing framework for C - Version 2.1-3 00:17:14.730 http://cunit.sourceforge.net/ 00:17:14.730 00:17:14.730 00:17:14.730 Suite: nvme_compliance 00:17:14.730 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-27 09:49:30.141577] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:14.730 [2024-11-27 09:49:30.142874] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:14.730 [2024-11-27 09:49:30.142885] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:14.730 [2024-11-27 09:49:30.142890] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:14.730 [2024-11-27 09:49:30.144590] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:14.730 passed 00:17:14.990 Test: admin_identify_ctrlr_verify_fused ...[2024-11-27 09:49:30.222095] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:14.990 [2024-11-27 09:49:30.225118] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:14.990 passed 00:17:14.990 Test: admin_identify_ns ...[2024-11-27 09:49:30.303654] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:14.990 [2024-11-27 09:49:30.364173] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:14.990 [2024-11-27 09:49:30.372170] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:14.990 [2024-11-27 09:49:30.393252] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:17:14.990 passed 00:17:15.250 Test: admin_get_features_mandatory_features ...[2024-11-27 09:49:30.467488] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:15.250 [2024-11-27 09:49:30.470506] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:15.250 passed 00:17:15.250 Test: admin_get_features_optional_features ...[2024-11-27 09:49:30.545944] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:15.250 [2024-11-27 09:49:30.548961] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:15.250 passed 00:17:15.250 Test: admin_set_features_number_of_queues ...[2024-11-27 09:49:30.624526] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:15.510 [2024-11-27 09:49:30.729246] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:15.510 passed 00:17:15.510 Test: admin_get_log_page_mandatory_logs ...[2024-11-27 09:49:30.805276] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:15.510 [2024-11-27 09:49:30.808302] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:15.510 passed 00:17:15.510 Test: admin_get_log_page_with_lpo ...[2024-11-27 09:49:30.882025] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:15.510 [2024-11-27 09:49:30.950167] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:15.510 [2024-11-27 09:49:30.963204] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:15.770 passed 00:17:15.770 Test: fabric_property_get ...[2024-11-27 09:49:31.037430] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:15.770 [2024-11-27 09:49:31.038637] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:17:15.770 [2024-11-27 09:49:31.040449] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:15.770 passed 00:17:15.770 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-27 09:49:31.115908] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:15.770 [2024-11-27 09:49:31.117105] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:15.770 [2024-11-27 09:49:31.119928] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:15.770 passed 00:17:15.770 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-27 09:49:31.194630] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:16.030 [2024-11-27 09:49:31.278170] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:16.030 [2024-11-27 09:49:31.294170] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:16.030 [2024-11-27 09:49:31.299245] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:16.030 passed 00:17:16.030 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-27 09:49:31.374294] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:16.030 [2024-11-27 09:49:31.375489] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:17:16.030 [2024-11-27 09:49:31.377310] vfio_user.c:2802:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:17:16.030 passed 00:17:16.030 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-27 09:49:31.455535] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:16.289 [2024-11-27 09:49:31.532170] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:16.290 [2024-11-27 09:49:31.556164] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:16.290 [2024-11-27 09:49:31.561234] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:16.290 passed 00:17:16.290 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-27 09:49:31.633452] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:16.290 [2024-11-27 09:49:31.634650] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:16.290 [2024-11-27 09:49:31.634669] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:16.290 [2024-11-27 09:49:31.637478] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:16.290 passed 00:17:16.290 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-27 09:49:31.712526] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:16.550 [2024-11-27 09:49:31.808164] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:16.550 [2024-11-27 09:49:31.816162] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:16.550 [2024-11-27 09:49:31.824166] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:16.550 [2024-11-27 09:49:31.832163] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:16.550 [2024-11-27 09:49:31.861230] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:16.550 passed 00:17:16.550 Test: admin_create_io_sq_verify_pc ...[2024-11-27 09:49:31.932444] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:16.550 [2024-11-27 09:49:31.951173] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:16.550 [2024-11-27 09:49:31.968604] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:16.550 passed 00:17:16.810 Test: admin_create_io_qp_max_qps ...[2024-11-27 09:49:32.044040] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:17.751 [2024-11-27 09:49:33.145168] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:17:18.320 [2024-11-27 09:49:33.532948] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:18.320 passed 00:17:18.320 Test: admin_create_io_sq_shared_cq ...[2024-11-27 09:49:33.608535] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:18.320 [2024-11-27 09:49:33.741164] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:18.320 [2024-11-27 09:49:33.778208] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:18.580 passed 00:17:18.580 00:17:18.580 Run Summary: Type Total Ran Passed Failed Inactive 00:17:18.580 suites 1 1 n/a 0 0 00:17:18.580 tests 18 18 18 0 0 00:17:18.580 asserts 
360 360 360 0 n/a 00:17:18.580 00:17:18.580 Elapsed time = 1.492 seconds 00:17:18.580 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3841206 00:17:18.580 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 3841206 ']' 00:17:18.580 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 3841206 00:17:18.580 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:17:18.580 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:18.580 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3841206 00:17:18.580 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:18.580 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:18.580 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3841206' 00:17:18.580 killing process with pid 3841206 00:17:18.580 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 3841206 00:17:18.580 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 3841206 00:17:18.580 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:18.580 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:18.580 00:17:18.580 real 0m6.189s 00:17:18.580 user 0m17.509s 00:17:18.580 sys 0m0.550s 00:17:18.580 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:18.580 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:18.580 ************************************ 00:17:18.580 END TEST nvmf_vfio_user_nvme_compliance 00:17:18.580 ************************************ 00:17:18.580 09:49:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:18.580 09:49:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:18.580 09:49:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:18.580 09:49:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:18.841 ************************************ 00:17:18.841 START TEST nvmf_vfio_user_fuzz 00:17:18.841 ************************************ 00:17:18.841 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:18.841 * Looking for test storage... 
00:17:18.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:18.841 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:18.841 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:17:18.841 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:18.841 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:18.841 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:18.841 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:18.841 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:18.841 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:17:18.841 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:17:18.841 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:17:18.841 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:18.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.842 --rc genhtml_branch_coverage=1 00:17:18.842 --rc genhtml_function_coverage=1 00:17:18.842 --rc genhtml_legend=1 00:17:18.842 --rc geninfo_all_blocks=1 00:17:18.842 --rc geninfo_unexecuted_blocks=1 00:17:18.842 00:17:18.842 ' 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:18.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.842 --rc genhtml_branch_coverage=1 00:17:18.842 --rc genhtml_function_coverage=1 00:17:18.842 --rc genhtml_legend=1 00:17:18.842 --rc geninfo_all_blocks=1 00:17:18.842 --rc geninfo_unexecuted_blocks=1 00:17:18.842 00:17:18.842 ' 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:18.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.842 --rc genhtml_branch_coverage=1 00:17:18.842 --rc genhtml_function_coverage=1 00:17:18.842 --rc genhtml_legend=1 00:17:18.842 --rc geninfo_all_blocks=1 00:17:18.842 --rc geninfo_unexecuted_blocks=1 00:17:18.842 00:17:18.842 ' 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:18.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.842 --rc genhtml_branch_coverage=1 00:17:18.842 --rc genhtml_function_coverage=1 00:17:18.842 --rc genhtml_legend=1 00:17:18.842 --rc geninfo_all_blocks=1 00:17:18.842 --rc geninfo_unexecuted_blocks=1 00:17:18.842 00:17:18.842 ' 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:18.842 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:19.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3842598 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3842598' 00:17:19.103 Process pid: 3842598 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3842598 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 3842598 ']' 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
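The `waitforlisten 3842598` step above blocks until the freshly launched nvmf_tgt answers on its JSON-RPC socket. A minimal equivalent poll loop — a sketch only, assuming the stock scripts/rpc.py client and the side-effect-free rpc_get_methods RPC, not the harness's actual implementation — would be:

    # Poll the target's UNIX-domain RPC socket until it responds, then continue.
    for _ in $(seq 1 100); do
        scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.2
    done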
00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:19.103 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:20.042 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:20.042 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:17:20.042 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:20.983 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:20.983 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.983 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:20.983 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.983 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:20.983 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:20.983 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.983 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:20.983 malloc0 00:17:20.983 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.983 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:17:20.983 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.983 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:20.983 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.983 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:20.983 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.983 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:20.983 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.983 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:20.983 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.983 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:20.983 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.983 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
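Before the fuzzer launches, the target has just been provisioned through five RPCs, all visible in the trace above: create the VFIOUSER transport, create a 64 MB malloc bdev with 512-byte blocks, create subsystem nqn.2021-09.io.spdk:cnode0, attach malloc0 as its namespace, and add a listener at /var/run/vfio-user. Collapsed into direct rpc.py invocations (a sketch — rpc_cmd in the test harness wraps this same client), the sequence is:

    # Transport, backing bdev, subsystem, namespace, listener -- in that order.
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0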
00:17:20.983 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:53.087 Fuzzing completed. Shutting down the fuzz application 00:17:53.087 00:17:53.087 Dumping successful admin opcodes: 00:17:53.087 9, 10, 00:17:53.087 Dumping successful io opcodes: 00:17:53.087 0, 00:17:53.087 NS: 0x20000081ef00 I/O qp, Total commands completed: 1428961, total successful commands: 5614, random_seed: 559063168 00:17:53.087 NS: 0x20000081ef00 admin qp, Total commands completed: 355680, total successful commands: 94, random_seed: 1657643968 00:17:53.087 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:53.087 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.087 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:53.087 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.087 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3842598 00:17:53.087 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 3842598 ']' 00:17:53.087 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 3842598 00:17:53.087 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:17:53.087 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.087 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3842598 00:17:53.087 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:53.087 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:53.087 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3842598' 00:17:53.087 killing process with pid 3842598 00:17:53.087 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 3842598 00:17:53.087 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 3842598 00:17:53.087 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:53.087 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:53.087 00:17:53.087 real 0m32.795s 00:17:53.087 user 0m37.648s 00:17:53.087 sys 0m24.616s 00:17:53.087 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:53.087 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:53.087 ************************************ 
00:17:53.087 END TEST nvmf_vfio_user_fuzz 00:17:53.087 ************************************ 00:17:53.087 09:50:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:53.087 09:50:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:53.087 09:50:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:53.087 09:50:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:53.087 ************************************ 00:17:53.087 START TEST nvmf_auth_target 00:17:53.087 ************************************ 00:17:53.087 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:53.087 * Looking for test storage... 00:17:53.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:53.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.087 --rc genhtml_branch_coverage=1 00:17:53.087 --rc genhtml_function_coverage=1 00:17:53.087 --rc genhtml_legend=1 00:17:53.087 --rc geninfo_all_blocks=1 00:17:53.087 --rc geninfo_unexecuted_blocks=1 00:17:53.087 00:17:53.087 ' 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:53.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.087 --rc genhtml_branch_coverage=1 00:17:53.087 --rc genhtml_function_coverage=1 00:17:53.087 --rc genhtml_legend=1 00:17:53.087 --rc geninfo_all_blocks=1 00:17:53.087 --rc geninfo_unexecuted_blocks=1 00:17:53.087 00:17:53.087 ' 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:53.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.087 --rc genhtml_branch_coverage=1 00:17:53.087 --rc genhtml_function_coverage=1 00:17:53.087 --rc genhtml_legend=1 00:17:53.087 --rc geninfo_all_blocks=1 00:17:53.087 --rc geninfo_unexecuted_blocks=1 00:17:53.087 00:17:53.087 ' 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:53.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.087 --rc genhtml_branch_coverage=1 00:17:53.087 --rc genhtml_function_coverage=1 00:17:53.087 --rc genhtml_legend=1 00:17:53.087 --rc geninfo_all_blocks=1 00:17:53.087 --rc geninfo_unexecuted_blocks=1 00:17:53.087 00:17:53.087 ' 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:53.087 09:50:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:53.087 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:53.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:17:53.088 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.681 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:59.681 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:17:59.681 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:59.681 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:59.681 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:59.681 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:59.681 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:59.681 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:17:59.681 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:59.681 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:17:59.681 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:17:59.681 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:17:59.681 
09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:17:59.681 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:17:59.681 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:17:59.681 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:59.681 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:59.681 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:59.681 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:59.681 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:59.681 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:59.682 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:59.682 09:50:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:59.682 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:59.682 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:59.682 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:59.682 09:50:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:59.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:59.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:17:59.682 00:17:59.682 --- 10.0.0.2 ping statistics --- 00:17:59.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.682 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:59.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:59.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:17:59.682 00:17:59.682 --- 10.0.0.1 ping statistics --- 00:17:59.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.682 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3852584 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3852584 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3852584 ']' 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:59.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
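
The exchange above is the crux of nvmf_tcp_init: the two E810 ports are split into a back-to-back topology, with cvl_0_0 moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) and cvl_0_1 left in the root namespace as the initiator (10.0.0.1), an iptables rule opened for TCP/4420, and a ping run in each direction as a sanity check before nvmf_tgt is launched inside the namespace. A minimal sketch of the same topology that runs on any box, substituting a veth pair for the physical ports (tgt_ns, veth_init and veth_tgt are illustrative names, not the harness's own):

    # stand-in for the cvl_0_0/cvl_0_1 split: a veth pair across a namespace
    ip netns add tgt_ns
    ip link add veth_init type veth peer name veth_tgt
    ip link set veth_tgt netns tgt_ns                               # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev veth_init
    ip link set veth_init up
    ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
    ip netns exec tgt_ns ip link set veth_tgt up
    ip netns exec tgt_ns ip link set lo up
    iptables -I INPUT 1 -i veth_init -p tcp --dport 4420 -j ACCEPT  # mirrors the ipts helper above
    ping -c 1 10.0.0.2 && ip netns exec tgt_ns ping -c 1 10.0.0.1   # both directions, as in the trace
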
00:17:59.683 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:59.683 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3852862 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ca2b45b3193bf1be3294cca30be26f512d823f24eadea503 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.rzE 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ca2b45b3193bf1be3294cca30be26f512d823f24eadea503 0 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ca2b45b3193bf1be3294cca30be26f512d823f24eadea503 0 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ca2b45b3193bf1be3294cca30be26f512d823f24eadea503 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
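
That trailing "python -" is the step that turns the raw hex string into a DHHC-1 secret. A sketch of the transformation, assuming the DH-HMAC-CHAP convention of base64-encoding the ASCII key with a CRC-32 appended (taken here as little-endian; the real heredoc in nvmf/common.sh may differ cosmetically, and format_dhchap_secret is an illustrative name):

    format_dhchap_secret() {  # usage: format_dhchap_secret <hex-key-string> <digest-id 0..3>
        # prints base64(key || crc32(key)) with a "DHHC-1:<digest>:" prefix and a trailing ":"
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); c=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(k+c).decode()))' "$1" "$2"
    }

The shape can be checked against this very run: the --dhchap-secret handed to nvme connect further down, DHHC-1:00:Y2EyYjQ1YjMx..., is precisely the base64 of the ca2b45b3... hex string generated here, with four checksum bytes at the end.
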
00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.rzE 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.rzE 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.rzE 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7a116edb3efdc36c2fb1bbe2eb76f9849d2d390e44a5752b47bb4ee00d595597 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ewz 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7a116edb3efdc36c2fb1bbe2eb76f9849d2d390e44a5752b47bb4ee00d595597 3 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7a116edb3efdc36c2fb1bbe2eb76f9849d2d390e44a5752b47bb4ee00d595597 3 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7a116edb3efdc36c2fb1bbe2eb76f9849d2d390e44a5752b47bb4ee00d595597 00:18:00.256 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:18:00.257 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:00.518 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ewz 00:18:00.518 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ewz 00:18:00.518 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.ewz 00:18:00.518 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:18:00.518 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:00.518 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:00.518 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
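
Slot 0 is now fully stocked: keys[0] holds the plain (digest 0, non-hashed) host key and ckeys[0] a sha512 controller key. The two play complementary roles in DH-HMAC-CHAP: the --dhchap-key authenticates the host to the controller, while the controller key (--dhchap-ctrlr-key on the target RPC, --dhchap-ctrl-secret on nvme connect) lets the host authenticate the controller back, giving bidirectional authentication. Condensed from how the pair travels later in this trace (key0_secret and ckey0_secret are placeholders for the inline DHHC-1 strings):

    # target side, via rpc.py (rpc_cmd wraps the same call): admit the host NQN with key0, answer with ckey0
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # initiator side: the same pair as inline secrets (other connect flags trimmed)
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-secret "$key0_secret" --dhchap-ctrl-secret "$ckey0_secret"

The trace resumes below with the same mint-and-pair dance: slots 1 and 2 get sha256/sha384 and sha384/sha256 key/ckey pairs, while slot 3 gets a sha512 key with no controller key at all.
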
00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8bd7b72f249e423b486c3c71610d9408 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.8Xl 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8bd7b72f249e423b486c3c71610d9408 1 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8bd7b72f249e423b486c3c71610d9408 1 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8bd7b72f249e423b486c3c71610d9408 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.8Xl 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.8Xl 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.8Xl 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7fbe4a76f9bad2bebee794d44fe7f90a7058aea029fa7aeb 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.iwW 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7fbe4a76f9bad2bebee794d44fe7f90a7058aea029fa7aeb 2 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7fbe4a76f9bad2bebee794d44fe7f90a7058aea029fa7aeb 2 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:00.519 09:50:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7fbe4a76f9bad2bebee794d44fe7f90a7058aea029fa7aeb 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.iwW 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.iwW 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.iwW 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2c91c40c9f363dd20ca0fdf9a3d6722719009e127b9efbf0 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.g31 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2c91c40c9f363dd20ca0fdf9a3d6722719009e127b9efbf0 2 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2c91c40c9f363dd20ca0fdf9a3d6722719009e127b9efbf0 2 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2c91c40c9f363dd20ca0fdf9a3d6722719009e127b9efbf0 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.g31 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.g31 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.g31 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
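
Each of these slots is minted by the same helper whose locals were just declared, and every step is visible in the trace: xxd pulls len/2 random bytes from /dev/urandom as a hex string, mktemp names a per-digest key file, the formatted secret is written out with 0600 permissions, and the path is echoed back into the keys[]/ckeys[] arrays. Condensed into one place (format_dhchap_secret is the illustrative formatter sketched earlier):

    gen_dhchap_key() {  # usage: gen_dhchap_key <null|sha256|sha384|sha512> <32|48|64>
        local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        local digest=$1 len=$2 file key
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)      # len hex characters of randomness
        file=$(mktemp -t "spdk.key-$digest.XXX")
        format_dhchap_secret "$key" "${digests[$digest]}" > "$file"
        chmod 0600 "$file"                                  # match the chmod in the trace
        echo "$file"
    }
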
00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=69008b1d468f2b87dd31764d2b6d934c 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.QU9 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 69008b1d468f2b87dd31764d2b6d934c 1 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 69008b1d468f2b87dd31764d2b6d934c 1 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=69008b1d468f2b87dd31764d2b6d934c 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:18:00.519 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:00.780 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.QU9 00:18:00.780 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.QU9 00:18:00.780 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.QU9 00:18:00.780 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:18:00.780 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:00.780 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:00.780 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:00.780 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:00.780 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:00.780 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:00.780 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=25b34764113743528383181fe3f2d3dc0802c7d267ed4fd468edf1d64ac47798 00:18:00.780 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:00.780 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.bxt 00:18:00.780 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 25b34764113743528383181fe3f2d3dc0802c7d267ed4fd468edf1d64ac47798 3 00:18:00.780 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 25b34764113743528383181fe3f2d3dc0802c7d267ed4fd468edf1d64ac47798 3 00:18:00.781 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:00.781 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:00.781 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=25b34764113743528383181fe3f2d3dc0802c7d267ed4fd468edf1d64ac47798 00:18:00.781 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:18:00.781 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:00.781 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.bxt 00:18:00.781 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.bxt 00:18:00.781 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.bxt 00:18:00.781 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:18:00.781 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3852584 00:18:00.781 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3852584 ']' 00:18:00.781 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.781 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:00.781 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.781 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:00.781 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.042 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:01.042 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:01.042 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3852862 /var/tmp/host.sock 00:18:01.042 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3852862 ']' 00:18:01.042 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:01.042 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:01.042 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:01.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
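
With the nvmf target already listening and the host app coming up on /var/tmp/host.sock, everything from here on is driven over two RPC sockets: rpc_cmd talks to the target on the default /var/tmp/spdk.sock, while the hostrpc wrapper points rpc.py at the bdev_nvme host process on /var/tmp/host.sock. Each key file is registered with the keyring on both sides before it is referenced by name; for slot 0 that boils down to (paths exactly as in this run):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # target process (default socket /var/tmp/spdk.sock)
    $rpc keyring_file_add_key key0  /tmp/spdk.key-null.rzE
    $rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ewz

    # host process (spdk_tgt serving /var/tmp/host.sock)
    $rpc -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.rzE
    $rpc -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ewz
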
00:18:01.042 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:01.042 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.042 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:01.042 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:01.042 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:18:01.042 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.042 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.042 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.042 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:01.042 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.rzE 00:18:01.042 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.042 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.304 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.304 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.rzE 00:18:01.304 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.rzE 00:18:01.304 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.ewz ]] 00:18:01.304 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ewz 00:18:01.304 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.304 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.304 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.304 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ewz 00:18:01.304 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ewz 00:18:01.565 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:01.565 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.8Xl 00:18:01.565 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.565 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.565 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.565 09:50:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.8Xl 00:18:01.565 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.8Xl 00:18:01.826 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.iwW ]] 00:18:01.826 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iwW 00:18:01.826 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.826 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.826 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.826 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iwW 00:18:01.826 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iwW 00:18:01.826 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:01.826 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.g31 00:18:01.826 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.826 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.826 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.826 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.g31 00:18:01.826 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.g31 00:18:02.087 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.QU9 ]] 00:18:02.087 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.QU9 00:18:02.087 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.087 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.087 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.087 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.QU9 00:18:02.087 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.QU9 00:18:02.348 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:02.348 09:50:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.bxt 00:18:02.348 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.348 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.348 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.348 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.bxt 00:18:02.348 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.bxt 00:18:02.348 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:18:02.348 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:02.348 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:02.348 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.348 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:02.348 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:02.609 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:18:02.609 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.609 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:02.609 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:02.609 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:02.609 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.609 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.609 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.609 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.609 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.609 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.609 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.609 
09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.870 00:18:02.870 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.870 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.870 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.132 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.132 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.132 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.132 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.132 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.132 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.132 { 00:18:03.132 "cntlid": 1, 00:18:03.132 "qid": 0, 00:18:03.132 "state": "enabled", 00:18:03.132 "thread": "nvmf_tgt_poll_group_000", 00:18:03.132 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:03.132 "listen_address": { 00:18:03.132 "trtype": "TCP", 00:18:03.132 "adrfam": "IPv4", 00:18:03.132 "traddr": "10.0.0.2", 00:18:03.132 "trsvcid": "4420" 00:18:03.132 }, 00:18:03.132 "peer_address": { 00:18:03.132 "trtype": "TCP", 00:18:03.132 "adrfam": "IPv4", 00:18:03.132 "traddr": "10.0.0.1", 00:18:03.132 "trsvcid": "53264" 00:18:03.132 }, 00:18:03.132 "auth": { 00:18:03.132 "state": "completed", 00:18:03.132 "digest": "sha256", 00:18:03.132 "dhgroup": "null" 00:18:03.132 } 00:18:03.132 } 00:18:03.132 ]' 00:18:03.132 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.132 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:03.132 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.132 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:03.132 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.394 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.394 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.394 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.394 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=: 00:18:03.394 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=: 00:18:04.337 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.337 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:04.337 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.337 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.337 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.337 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.337 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:04.337 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:04.337 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:18:04.337 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.337 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:04.337 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:04.337 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:04.337 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.337 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.337 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.337 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.337 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.337 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.337 09:50:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.337 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.598 00:18:04.598 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.598 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.598 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.859 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.859 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.859 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.859 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.859 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.859 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.859 { 00:18:04.859 "cntlid": 3, 00:18:04.859 "qid": 0, 00:18:04.859 "state": "enabled", 00:18:04.859 "thread": "nvmf_tgt_poll_group_000", 00:18:04.859 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:04.859 "listen_address": { 00:18:04.859 "trtype": "TCP", 00:18:04.859 "adrfam": "IPv4", 00:18:04.859 "traddr": "10.0.0.2", 00:18:04.859 "trsvcid": "4420" 00:18:04.859 }, 00:18:04.859 "peer_address": { 00:18:04.859 "trtype": "TCP", 00:18:04.859 "adrfam": "IPv4", 00:18:04.859 "traddr": "10.0.0.1", 00:18:04.859 "trsvcid": "53288" 00:18:04.859 }, 00:18:04.859 "auth": { 00:18:04.859 "state": "completed", 00:18:04.859 "digest": "sha256", 00:18:04.859 "dhgroup": "null" 00:18:04.859 } 00:18:04.859 } 00:18:04.859 ]' 00:18:04.859 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.859 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:04.859 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.859 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:04.859 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.859 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.859 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.859 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.120 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==: 00:18:05.120 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==: 00:18:05.690 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.690 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:05.690 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.690 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.690 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.690 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.690 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:05.690 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:05.950 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:18:05.950 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.950 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:05.950 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:05.950 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:05.950 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.950 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.950 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.950 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.950 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.950 09:50:21 
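Each RPC-path pass is then mirrored through the kernel initiator: nvme_connect (auth.sh@36) hands nvme-cli the same secrets in DHHC-1 wire format, and the target sees a second authenticated admin queue. A sketch of that leg, with the long base64 secrets elided here; the full DHHC-1 strings appear verbatim in the trace above:

    # Kernel-side connect using the same host identity and DH-HMAC-CHAP secrets.
    nvme connect -t tcp -a 10.0.0.2 -l 0 -i 1 \
        -n nqn.2024-03.io.spdk:cnode0 \
        -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # auth.sh@82
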
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.950 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.950 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.210 00:18:06.210 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.210 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.210 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.470 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.470 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.470 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.470 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.470 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.470 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.470 { 00:18:06.470 "cntlid": 5, 00:18:06.470 "qid": 0, 00:18:06.470 "state": "enabled", 00:18:06.470 "thread": "nvmf_tgt_poll_group_000", 00:18:06.470 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:06.470 "listen_address": { 00:18:06.470 "trtype": "TCP", 00:18:06.470 "adrfam": "IPv4", 00:18:06.470 "traddr": "10.0.0.2", 00:18:06.470 "trsvcid": "4420" 00:18:06.470 }, 00:18:06.470 "peer_address": { 00:18:06.470 "trtype": "TCP", 00:18:06.470 "adrfam": "IPv4", 00:18:06.470 "traddr": "10.0.0.1", 00:18:06.470 "trsvcid": "53316" 00:18:06.470 }, 00:18:06.470 "auth": { 00:18:06.470 "state": "completed", 00:18:06.470 "digest": "sha256", 00:18:06.470 "dhgroup": "null" 00:18:06.470 } 00:18:06.470 } 00:18:06.470 ]' 00:18:06.470 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.470 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:06.470 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.470 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:06.470 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.470 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.470 09:50:21 
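The assertion block at auth.sh@73-77 is worth reading as a unit: the qpairs JSON dumped above is one record per queue pair, and only three fields of its auth object are actually checked. A sketch of the same checks, assuming rpc_cmd talks to the target's default RPC socket and $RPC/$HOSTSOCK as in the earlier sketch:

    # Confirm the attached controller exists on the host side (auth.sh@73).
    [[ $($RPC -s $HOSTSOCK bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # Confirm the target negotiated exactly what this iteration configured (auth.sh@74-77).
    qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null      ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
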
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.470 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.730 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn: 00:18:06.730 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn: 00:18:07.670 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.671 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:07.671 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.671 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.671 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.671 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.671 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:07.671 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:07.671 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:18:07.671 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.671 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:07.671 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:07.671 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:07.671 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.671 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:07.671 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.671 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:07.671 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.671 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:07.671 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:07.671 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:07.932 00:18:07.932 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.932 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.932 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.192 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.192 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.192 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.192 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.192 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.192 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.192 { 00:18:08.192 "cntlid": 7, 00:18:08.192 "qid": 0, 00:18:08.192 "state": "enabled", 00:18:08.192 "thread": "nvmf_tgt_poll_group_000", 00:18:08.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:08.192 "listen_address": { 00:18:08.192 "trtype": "TCP", 00:18:08.192 "adrfam": "IPv4", 00:18:08.192 "traddr": "10.0.0.2", 00:18:08.192 "trsvcid": "4420" 00:18:08.192 }, 00:18:08.192 "peer_address": { 00:18:08.192 "trtype": "TCP", 00:18:08.192 "adrfam": "IPv4", 00:18:08.192 "traddr": "10.0.0.1", 00:18:08.192 "trsvcid": "53352" 00:18:08.192 }, 00:18:08.192 "auth": { 00:18:08.192 "state": "completed", 00:18:08.192 "digest": "sha256", 00:18:08.192 "dhgroup": "null" 00:18:08.192 } 00:18:08.192 } 00:18:08.192 ]' 00:18:08.192 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.192 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:08.192 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.192 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:08.192 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.192 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.192 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.192 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.468 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:18:08.468 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:18:09.038 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.038 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:09.038 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.038 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.038 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.038 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:09.038 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.038 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:09.038 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:09.298 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:18:09.298 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.298 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:09.298 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:09.298 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:09.298 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.298 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.298 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.298 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.298 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.298 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.298 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.298 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.559 00:18:09.559 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.559 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.559 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.819 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.819 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.819 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.819 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.819 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.819 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.819 { 00:18:09.819 "cntlid": 9, 00:18:09.819 "qid": 0, 00:18:09.819 "state": "enabled", 00:18:09.819 "thread": "nvmf_tgt_poll_group_000", 00:18:09.819 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:09.819 "listen_address": { 00:18:09.819 "trtype": "TCP", 00:18:09.819 "adrfam": "IPv4", 00:18:09.819 "traddr": "10.0.0.2", 00:18:09.819 "trsvcid": "4420" 00:18:09.819 }, 00:18:09.819 "peer_address": { 00:18:09.819 "trtype": "TCP", 00:18:09.819 "adrfam": "IPv4", 00:18:09.819 "traddr": "10.0.0.1", 00:18:09.819 "trsvcid": "45690" 00:18:09.819 }, 00:18:09.819 "auth": { 00:18:09.819 "state": "completed", 00:18:09.819 "digest": "sha256", 00:18:09.819 "dhgroup": "ffdhe2048" 00:18:09.819 } 00:18:09.819 } 00:18:09.819 ]' 00:18:09.819 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.819 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:09.819 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.819 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:18:09.819 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.819 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.819 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.819 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.079 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=: 00:18:10.079 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=: 00:18:10.650 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.650 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:10.650 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.650 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.650 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.650 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.650 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:10.650 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:10.911 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:18:10.911 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.911 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:10.911 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:10.911 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:10.911 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.911 09:50:26 
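The parameter expansion at auth.sh@68 just above is the one subtle piece of bash in the loop: ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expands to the ctrlr-key flag pair only when a ckey exists for that key index ($3 is connect_authenticate's key argument), which is why the key3 iterations earlier (09:50:22-23) ran nvmf_subsystem_add_host and the attach with --dhchap-key key3 alone and no bidirectional key. In isolation, with keyid standing in for the function's positional $3:

    # Expands to (--dhchap-ctrlr-key ckey1) for keyid=1, and to () when ckeys[3] is unset.
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "key$keyid" "${ckey[@]}"
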
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.911 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.911 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.911 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.911 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.911 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.911 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.172 00:18:11.172 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.172 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.172 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.432 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.432 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.432 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.432 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.432 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.432 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.432 { 00:18:11.432 "cntlid": 11, 00:18:11.432 "qid": 0, 00:18:11.432 "state": "enabled", 00:18:11.432 "thread": "nvmf_tgt_poll_group_000", 00:18:11.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:11.432 "listen_address": { 00:18:11.432 "trtype": "TCP", 00:18:11.432 "adrfam": "IPv4", 00:18:11.432 "traddr": "10.0.0.2", 00:18:11.432 "trsvcid": "4420" 00:18:11.432 }, 00:18:11.432 "peer_address": { 00:18:11.432 "trtype": "TCP", 00:18:11.432 "adrfam": "IPv4", 00:18:11.432 "traddr": "10.0.0.1", 00:18:11.432 "trsvcid": "45716" 00:18:11.432 }, 00:18:11.432 "auth": { 00:18:11.432 "state": "completed", 00:18:11.432 "digest": "sha256", 00:18:11.432 "dhgroup": "ffdhe2048" 00:18:11.432 } 00:18:11.432 } 00:18:11.432 ]' 00:18:11.432 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.432 09:50:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:11.432 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.432 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:11.432 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.432 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.432 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.432 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.693 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==: 00:18:11.693 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==: 00:18:12.264 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.264 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:12.264 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.264 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.264 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.264 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.264 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:12.264 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:12.524 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:18:12.524 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.524 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:12.524 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:12.524 09:50:27 
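The secrets exercised throughout follow the NVMe DH-HMAC-CHAP container format "DHHC-1:<t>:<base64>:", where the middle field hints the transform applied to the secret (00 = cleartext) and the base64 payload carries the secret followed by a 4-byte CRC tail. As a quick sanity check against keys[0] from this run (this decoding is the editor's reading of the format, not something the test itself performs):

    # 64 base64 chars of secret (48 bytes) plus "4+vL3Q==" (4 CRC bytes) = 52 bytes total.
    k='DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==:'
    cut -d: -f3 <<< "$k" | base64 -d | wc -c   # prints 52
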
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:12.524 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.524 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.524 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.524 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.525 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.525 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.525 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.525 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.786 00:18:12.786 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.786 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.786 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.048 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.048 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.048 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.048 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.048 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.048 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.048 { 00:18:13.048 "cntlid": 13, 00:18:13.048 "qid": 0, 00:18:13.048 "state": "enabled", 00:18:13.048 "thread": "nvmf_tgt_poll_group_000", 00:18:13.048 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:13.048 "listen_address": { 00:18:13.048 "trtype": "TCP", 00:18:13.048 "adrfam": "IPv4", 00:18:13.048 "traddr": "10.0.0.2", 00:18:13.048 "trsvcid": "4420" 00:18:13.048 }, 00:18:13.048 "peer_address": { 00:18:13.048 "trtype": "TCP", 00:18:13.048 "adrfam": "IPv4", 00:18:13.048 "traddr": "10.0.0.1", 00:18:13.048 "trsvcid": "45752" 00:18:13.048 }, 00:18:13.048 "auth": { 00:18:13.048 "state": "completed", 00:18:13.048 "digest": 
"sha256", 00:18:13.048 "dhgroup": "ffdhe2048" 00:18:13.048 } 00:18:13.048 } 00:18:13.048 ]' 00:18:13.048 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.048 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:13.048 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.048 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:13.048 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.048 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.048 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.048 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.308 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn: 00:18:13.309 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn: 00:18:13.887 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.888 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:13.888 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.888 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.888 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.888 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:13.888 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:13.888 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:14.152 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:18:14.152 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.152 09:50:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:14.152 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:14.152 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:14.152 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.152 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:14.152 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.152 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.152 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.152 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:14.152 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:14.152 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:14.413 00:18:14.413 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.413 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.413 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.672 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.672 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.672 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.672 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.672 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.672 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.672 { 00:18:14.672 "cntlid": 15, 00:18:14.672 "qid": 0, 00:18:14.672 "state": "enabled", 00:18:14.672 "thread": "nvmf_tgt_poll_group_000", 00:18:14.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:14.672 "listen_address": { 00:18:14.672 "trtype": "TCP", 00:18:14.672 "adrfam": "IPv4", 00:18:14.672 "traddr": "10.0.0.2", 00:18:14.672 "trsvcid": "4420" 00:18:14.672 }, 00:18:14.672 "peer_address": { 00:18:14.672 "trtype": "TCP", 00:18:14.672 "adrfam": "IPv4", 00:18:14.672 "traddr": "10.0.0.1", 00:18:14.672 
"trsvcid": "45770" 00:18:14.672 }, 00:18:14.672 "auth": { 00:18:14.672 "state": "completed", 00:18:14.672 "digest": "sha256", 00:18:14.672 "dhgroup": "ffdhe2048" 00:18:14.672 } 00:18:14.672 } 00:18:14.672 ]' 00:18:14.672 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.672 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:14.672 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.672 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:14.672 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.672 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.673 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.673 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.933 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:18:14.933 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:18:15.503 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.503 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:15.503 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.503 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.503 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.503 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:15.503 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.503 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:15.503 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:15.764 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:18:15.764 09:50:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.764 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:15.764 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:15.764 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:15.764 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.764 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.764 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.764 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.764 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.764 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.764 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.764 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.025 00:18:16.025 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.025 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:16.025 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.286 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.286 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.286 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.286 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.286 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.286 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.286 { 00:18:16.286 "cntlid": 17, 00:18:16.286 "qid": 0, 00:18:16.286 "state": "enabled", 00:18:16.286 "thread": "nvmf_tgt_poll_group_000", 00:18:16.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:16.286 "listen_address": { 00:18:16.286 "trtype": "TCP", 00:18:16.286 "adrfam": "IPv4", 
00:18:16.286 "traddr": "10.0.0.2", 00:18:16.286 "trsvcid": "4420" 00:18:16.286 }, 00:18:16.286 "peer_address": { 00:18:16.286 "trtype": "TCP", 00:18:16.286 "adrfam": "IPv4", 00:18:16.286 "traddr": "10.0.0.1", 00:18:16.286 "trsvcid": "45782" 00:18:16.286 }, 00:18:16.286 "auth": { 00:18:16.286 "state": "completed", 00:18:16.286 "digest": "sha256", 00:18:16.286 "dhgroup": "ffdhe3072" 00:18:16.286 } 00:18:16.286 } 00:18:16.286 ]' 00:18:16.286 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.286 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:16.286 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.286 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:16.286 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.286 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.286 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.286 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.548 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=: 00:18:16.548 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=: 00:18:17.120 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.120 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:17.120 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.120 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.120 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.120 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.120 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:17.120 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:17.381 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:18:17.381 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.381 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:17.381 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:17.381 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:17.381 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.381 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.381 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.381 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.381 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.381 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.381 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.381 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.641 00:18:17.641 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.641 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.641 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.902 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.902 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.902 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.902 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.902 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.902 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.902 { 
00:18:17.902 "cntlid": 19, 00:18:17.902 "qid": 0, 00:18:17.902 "state": "enabled", 00:18:17.902 "thread": "nvmf_tgt_poll_group_000", 00:18:17.902 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:17.902 "listen_address": { 00:18:17.902 "trtype": "TCP", 00:18:17.902 "adrfam": "IPv4", 00:18:17.902 "traddr": "10.0.0.2", 00:18:17.902 "trsvcid": "4420" 00:18:17.902 }, 00:18:17.902 "peer_address": { 00:18:17.902 "trtype": "TCP", 00:18:17.902 "adrfam": "IPv4", 00:18:17.902 "traddr": "10.0.0.1", 00:18:17.902 "trsvcid": "45816" 00:18:17.902 }, 00:18:17.902 "auth": { 00:18:17.902 "state": "completed", 00:18:17.902 "digest": "sha256", 00:18:17.902 "dhgroup": "ffdhe3072" 00:18:17.902 } 00:18:17.902 } 00:18:17.902 ]' 00:18:17.902 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.902 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:17.902 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.902 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:17.902 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.902 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.902 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.902 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.164 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==: 00:18:18.164 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==: 00:18:18.770 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.770 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:18.770 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.770 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.770 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.770 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.770 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:18.770 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:19.105 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:18:19.105 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.105 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:19.105 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:19.105 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:19.105 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.105 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.105 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.105 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.105 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.105 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.105 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.105 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.368 00:18:19.368 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.368 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.368 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.368 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.368 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.368 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.368 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.368 09:50:34 
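Two RPC channels interleave throughout this log: rpc_cmd drives the nvmf target over the default SPDK socket, while hostrpc (target/auth.sh@31) pins rpc.py to /var/tmp/host.sock, where a second SPDK application acts as the NVMe-oF host. Judging by its expansion in the trace, the wrapper is essentially the following (a reconstruction; the variable holding the repo root is an assumption):
# hostrpc as it expands at target/auth.sh@31: same rpc.py, different
# Unix socket, so bdev_nvme_* calls land in the host app, not the target.
hostrpc() {
    "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
}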
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.368 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.368 { 00:18:19.368 "cntlid": 21, 00:18:19.368 "qid": 0, 00:18:19.368 "state": "enabled", 00:18:19.368 "thread": "nvmf_tgt_poll_group_000", 00:18:19.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:19.368 "listen_address": { 00:18:19.368 "trtype": "TCP", 00:18:19.368 "adrfam": "IPv4", 00:18:19.368 "traddr": "10.0.0.2", 00:18:19.368 "trsvcid": "4420" 00:18:19.368 }, 00:18:19.368 "peer_address": { 00:18:19.368 "trtype": "TCP", 00:18:19.368 "adrfam": "IPv4", 00:18:19.368 "traddr": "10.0.0.1", 00:18:19.368 "trsvcid": "45852" 00:18:19.368 }, 00:18:19.368 "auth": { 00:18:19.368 "state": "completed", 00:18:19.368 "digest": "sha256", 00:18:19.368 "dhgroup": "ffdhe3072" 00:18:19.368 } 00:18:19.368 } 00:18:19.368 ]' 00:18:19.368 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.629 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:19.629 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.629 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:19.629 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.629 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.629 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.629 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.888 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn: 00:18:19.888 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn: 00:18:20.458 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.458 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:20.458 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.458 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.458 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
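The common/autotest_common.sh@563/@591 markers bracketing every rpc_cmd are SPDK's xtrace discipline: tracing is silenced while the RPC runs and the saved status is re-asserted once tracing resumes, which is why each successful call is followed by a literal [[ 0 == 0 ]] in the trace. A plausible reading of that bracketing (reconstructed, not copied from the source):
# Reconstruction of the @563/@591 pattern seen around each RPC above.
xtrace_disable            # autotest_common.sh@563
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
status=$?
xtrace_restore
[[ $status == 0 ]]        # traces as '[[ 0 == 0 ]]' at @591 on success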
-- # [[ 0 == 0 ]] 00:18:20.458 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.458 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:20.458 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:20.718 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:18:20.718 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.718 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:20.718 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:20.718 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:20.718 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.718 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:20.718 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.718 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.718 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.718 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:20.718 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:20.718 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:20.979 00:18:20.979 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.979 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.979 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.979 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.979 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.979 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.979 09:50:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.240 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.240 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:21.240 { 00:18:21.240 "cntlid": 23, 00:18:21.240 "qid": 0, 00:18:21.240 "state": "enabled", 00:18:21.240 "thread": "nvmf_tgt_poll_group_000", 00:18:21.240 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:21.240 "listen_address": { 00:18:21.240 "trtype": "TCP", 00:18:21.240 "adrfam": "IPv4", 00:18:21.240 "traddr": "10.0.0.2", 00:18:21.240 "trsvcid": "4420" 00:18:21.240 }, 00:18:21.240 "peer_address": { 00:18:21.240 "trtype": "TCP", 00:18:21.240 "adrfam": "IPv4", 00:18:21.240 "traddr": "10.0.0.1", 00:18:21.240 "trsvcid": "54688" 00:18:21.240 }, 00:18:21.240 "auth": { 00:18:21.240 "state": "completed", 00:18:21.240 "digest": "sha256", 00:18:21.240 "dhgroup": "ffdhe3072" 00:18:21.240 } 00:18:21.240 } 00:18:21.240 ]' 00:18:21.240 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:21.240 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:21.240 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:21.240 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:21.240 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.240 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.240 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.240 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.501 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:18:21.501 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:18:22.072 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.072 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:22.072 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.072 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.072 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:18:22.072 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:22.072 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:22.072 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:22.072 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:22.332 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:18:22.332 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.332 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:22.332 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:22.332 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:22.332 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.332 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.332 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.332 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.332 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.332 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.332 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.332 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.593 00:18:22.593 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.593 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.593 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.854 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.854 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.854 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.854 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.854 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.854 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.854 { 00:18:22.854 "cntlid": 25, 00:18:22.854 "qid": 0, 00:18:22.854 "state": "enabled", 00:18:22.854 "thread": "nvmf_tgt_poll_group_000", 00:18:22.854 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:22.854 "listen_address": { 00:18:22.854 "trtype": "TCP", 00:18:22.854 "adrfam": "IPv4", 00:18:22.854 "traddr": "10.0.0.2", 00:18:22.854 "trsvcid": "4420" 00:18:22.854 }, 00:18:22.854 "peer_address": { 00:18:22.854 "trtype": "TCP", 00:18:22.854 "adrfam": "IPv4", 00:18:22.854 "traddr": "10.0.0.1", 00:18:22.854 "trsvcid": "54724" 00:18:22.854 }, 00:18:22.854 "auth": { 00:18:22.854 "state": "completed", 00:18:22.854 "digest": "sha256", 00:18:22.854 "dhgroup": "ffdhe4096" 00:18:22.854 } 00:18:22.854 } 00:18:22.854 ]' 00:18:22.854 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.854 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:22.854 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.854 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:22.854 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.854 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.854 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.854 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.114 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=: 00:18:23.114 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=: 00:18:23.685 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.685 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
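The --dhchap-secret/--dhchap-ctrl-secret strings follow the NVMe DH-HMAC-CHAP secret representation, DHHC-1:<hh>:<base64>:, where <hh> names the hash used to transform the raw secret (00 = no transformation, 01/02/03 = SHA-256/384/512) and the base64 payload carries the key material plus a CRC-32 check value; that is why the keys in this log range from DHHC-1:00: to DHHC-1:03:. Outside this harness such a key can be produced with nvme-cli (flag spelling per recent nvme-cli releases; treat the exact options as an assumption):
# Generate a 48-byte secret with a SHA-384 transformation (hh = 02),
# bound to the host NQN used throughout this log:
nvme gen-dhchap-key --key-length=48 --hmac=2 \
    --nqn nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
# prints something of the form DHHC-1:02:<base64>: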
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:23.685 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.685 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.685 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.685 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:23.685 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:23.685 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:23.946 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:18:23.946 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.946 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:23.946 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:23.946 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:23.946 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.946 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.946 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.946 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.946 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.946 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.946 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.946 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.206 00:18:24.206 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.206 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.206 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
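Before inspecting qpairs, each round first confirms the attach actually produced a controller: bdev_nvme_get_controllers is piped through jq and compared against the name requested with -b (target/auth.sh@73). The equivalent one-liner:
# Controller-presence check from target/auth.sh@73: the attach
# succeeded iff the host app now reports a controller named nvme0.
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]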
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.466 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.466 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.466 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.466 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.466 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.466 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.466 { 00:18:24.466 "cntlid": 27, 00:18:24.466 "qid": 0, 00:18:24.466 "state": "enabled", 00:18:24.466 "thread": "nvmf_tgt_poll_group_000", 00:18:24.466 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:24.466 "listen_address": { 00:18:24.466 "trtype": "TCP", 00:18:24.466 "adrfam": "IPv4", 00:18:24.466 "traddr": "10.0.0.2", 00:18:24.466 "trsvcid": "4420" 00:18:24.466 }, 00:18:24.466 "peer_address": { 00:18:24.466 "trtype": "TCP", 00:18:24.466 "adrfam": "IPv4", 00:18:24.466 "traddr": "10.0.0.1", 00:18:24.466 "trsvcid": "54752" 00:18:24.466 }, 00:18:24.466 "auth": { 00:18:24.466 "state": "completed", 00:18:24.466 "digest": "sha256", 00:18:24.466 "dhgroup": "ffdhe4096" 00:18:24.466 } 00:18:24.466 } 00:18:24.466 ]' 00:18:24.466 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.466 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:24.466 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.466 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:24.466 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.466 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.466 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.466 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.726 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==: 00:18:24.726 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==: 00:18:25.294 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:18:25.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.553 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:25.553 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.553 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.553 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.553 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:25.553 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:25.554 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:25.554 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:18:25.554 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:25.554 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:25.554 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:25.554 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:25.554 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.554 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.554 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.554 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.554 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.554 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.554 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.554 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.813 00:18:26.073 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
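The qpair dump that follows is then probed three times (target/auth.sh@75-77) to assert that the negotiated digest, DH group, and final authentication state match what this round configured. Pulled out of the repeating pattern, for the current ffdhe4096 round:
# Verification step as run at target/auth.sh@75-77:
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]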
00:18:26.073 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.073 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.073 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.073 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.073 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.073 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.073 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.073 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:26.073 { 00:18:26.073 "cntlid": 29, 00:18:26.073 "qid": 0, 00:18:26.073 "state": "enabled", 00:18:26.073 "thread": "nvmf_tgt_poll_group_000", 00:18:26.073 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:26.073 "listen_address": { 00:18:26.073 "trtype": "TCP", 00:18:26.073 "adrfam": "IPv4", 00:18:26.073 "traddr": "10.0.0.2", 00:18:26.073 "trsvcid": "4420" 00:18:26.073 }, 00:18:26.073 "peer_address": { 00:18:26.073 "trtype": "TCP", 00:18:26.073 "adrfam": "IPv4", 00:18:26.073 "traddr": "10.0.0.1", 00:18:26.073 "trsvcid": "54780" 00:18:26.073 }, 00:18:26.073 "auth": { 00:18:26.073 "state": "completed", 00:18:26.073 "digest": "sha256", 00:18:26.073 "dhgroup": "ffdhe4096" 00:18:26.073 } 00:18:26.073 } 00:18:26.073 ]' 00:18:26.073 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:26.073 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:26.073 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:26.333 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:26.333 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:26.333 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.333 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.333 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.333 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn: 00:18:26.333 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: 
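After each SPDK-host round, the same key material is pushed through the kernel initiator as a cross-check: nvme_connect (target/auth.sh@36) hands the raw DHHC-1 strings to nvme-cli, and the controller is dropped again at auth.sh@82. Stripped of the long secrets ($key/$ckey stand for the DHHC-1 strings and $hostid for the uuid, both visible verbatim in the surrounding trace):
# Kernel-initiator cross-check, per target/auth.sh@36 and @82:
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" --hostid "$hostid" -l 0 \
    --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0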
--dhchap-ctrl-secret DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn: 00:18:27.273 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.273 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:27.273 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.273 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.273 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.273 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:27.273 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:27.273 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:27.273 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:18:27.273 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:27.273 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:27.273 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:27.273 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:27.273 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.273 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:27.273 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.273 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.273 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.273 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:27.273 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.273 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.534 00:18:27.534 09:50:42 
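Note that the key3 rounds register the host with --dhchap-key key3 only: there is no ckey3, so the controller side goes unauthenticated (unidirectional CHAP). The script handles both shapes with one line of bash at target/auth.sh@68, using :+ parameter expansion to emit the extra flag pair only when a controller key exists. In isolation:
# The :+ idiom from target/auth.sh@68 (values here are placeholders):
ckeys=([0]=c0 [1]=c1 [2]=c2 [3]=)        # slot 3 deliberately empty
keyid=3
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
echo "${#ckey[@]}"   # 0 for keyid=3; 2 (flag + value) for keyid 0..2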
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:27.534 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:27.534 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.794 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.794 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.794 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.794 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.794 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.794 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:27.794 { 00:18:27.794 "cntlid": 31, 00:18:27.794 "qid": 0, 00:18:27.794 "state": "enabled", 00:18:27.794 "thread": "nvmf_tgt_poll_group_000", 00:18:27.794 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:27.794 "listen_address": { 00:18:27.794 "trtype": "TCP", 00:18:27.794 "adrfam": "IPv4", 00:18:27.794 "traddr": "10.0.0.2", 00:18:27.794 "trsvcid": "4420" 00:18:27.794 }, 00:18:27.794 "peer_address": { 00:18:27.794 "trtype": "TCP", 00:18:27.794 "adrfam": "IPv4", 00:18:27.794 "traddr": "10.0.0.1", 00:18:27.794 "trsvcid": "54798" 00:18:27.794 }, 00:18:27.794 "auth": { 00:18:27.794 "state": "completed", 00:18:27.794 "digest": "sha256", 00:18:27.794 "dhgroup": "ffdhe4096" 00:18:27.794 } 00:18:27.794 } 00:18:27.794 ]' 00:18:27.794 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:27.794 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:27.794 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:27.794 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:27.794 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.054 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.054 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.054 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.054 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:18:28.054 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:18:28.625 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.885 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:28.885 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.885 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.885 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.885 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:28.885 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:28.885 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:28.885 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:28.885 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:18:28.885 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.885 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:28.885 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:28.885 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:28.885 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.885 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.885 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.885 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.885 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.885 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.885 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.885 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
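Here the outer loop advances: target/auth.sh@119 moves to the next DH group (ffdhe6144) and the inner loop re-runs key ids 0 through 3 against it, exactly as it did for ffdhe3072 and ffdhe4096 above. The driving structure, reconstructed from the @119/@120/@121/@123 markers (the full script iterates digests the same way):
for dhgroup in "${dhgroups[@]}"; do        # auth.sh@119
    for keyid in "${!keys[@]}"; do         # auth.sh@120
        hostrpc bdev_nvme_set_options \
            --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"   # @121
        connect_authenticate sha256 "$dhgroup" "$keyid"            # @123
    done
done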
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.456 00:18:29.456 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:29.456 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:29.456 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.456 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.456 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.456 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.456 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.456 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.456 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:29.456 { 00:18:29.456 "cntlid": 33, 00:18:29.456 "qid": 0, 00:18:29.456 "state": "enabled", 00:18:29.456 "thread": "nvmf_tgt_poll_group_000", 00:18:29.456 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:29.456 "listen_address": { 00:18:29.456 "trtype": "TCP", 00:18:29.456 "adrfam": "IPv4", 00:18:29.456 "traddr": "10.0.0.2", 00:18:29.456 "trsvcid": "4420" 00:18:29.456 }, 00:18:29.456 "peer_address": { 00:18:29.456 "trtype": "TCP", 00:18:29.456 "adrfam": "IPv4", 00:18:29.456 "traddr": "10.0.0.1", 00:18:29.456 "trsvcid": "54824" 00:18:29.456 }, 00:18:29.456 "auth": { 00:18:29.456 "state": "completed", 00:18:29.456 "digest": "sha256", 00:18:29.456 "dhgroup": "ffdhe6144" 00:18:29.456 } 00:18:29.456 } 00:18:29.456 ]' 00:18:29.456 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:29.456 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:29.456 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:29.716 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:29.717 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:29.717 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.717 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.717 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.977 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret 
DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=: 00:18:29.977 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=: 00:18:30.548 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.548 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:30.548 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.548 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.548 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.548 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:30.548 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:30.548 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:30.808 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:18:30.808 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:30.808 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:30.808 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:30.808 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:30.808 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.808 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.808 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.808 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.808 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.808 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.808 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
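bdev_connect (target/auth.sh@71 -> @60) is the last helper in the chain: it forwards to the host-side bdev_nvme_attach_controller with the transport details fixed, and that RPC is where DH-HMAC-CHAP is actually performed; a wrong key fails the attach itself rather than the later jq checks. Its likely shape, judging by the expansion in the trace (a reconstruction):
# bdev_connect per its expansion at target/auth.sh@60; extra arguments
# (-b, --dhchap-key, --dhchap-ctrlr-key) pass through untouched.
bdev_connect() {
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be" \
        -n nqn.2024-03.io.spdk:cnode0 "$@"
}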
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.808 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.069 00:18:31.069 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.069 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.069 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.329 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.329 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.329 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.329 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.329 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.329 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.329 { 00:18:31.329 "cntlid": 35, 00:18:31.329 "qid": 0, 00:18:31.329 "state": "enabled", 00:18:31.329 "thread": "nvmf_tgt_poll_group_000", 00:18:31.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:31.329 "listen_address": { 00:18:31.329 "trtype": "TCP", 00:18:31.329 "adrfam": "IPv4", 00:18:31.329 "traddr": "10.0.0.2", 00:18:31.329 "trsvcid": "4420" 00:18:31.329 }, 00:18:31.329 "peer_address": { 00:18:31.329 "trtype": "TCP", 00:18:31.329 "adrfam": "IPv4", 00:18:31.329 "traddr": "10.0.0.1", 00:18:31.329 "trsvcid": "54964" 00:18:31.329 }, 00:18:31.329 "auth": { 00:18:31.329 "state": "completed", 00:18:31.329 "digest": "sha256", 00:18:31.329 "dhgroup": "ffdhe6144" 00:18:31.329 } 00:18:31.329 } 00:18:31.329 ]' 00:18:31.329 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:31.329 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:31.329 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:31.329 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:31.329 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:31.329 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.329 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.329 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.589 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==: 00:18:31.589 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==: 00:18:32.158 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.158 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:32.158 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.158 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.158 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.158 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:32.158 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:32.158 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:32.418 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:18:32.418 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:32.418 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:32.418 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:32.418 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:32.418 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.418 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.418 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.418 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.418 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.418 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.418 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.418 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.677 00:18:32.938 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:32.938 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:32.938 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.938 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.938 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.938 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.938 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.938 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.938 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:32.938 { 00:18:32.938 "cntlid": 37, 00:18:32.938 "qid": 0, 00:18:32.938 "state": "enabled", 00:18:32.938 "thread": "nvmf_tgt_poll_group_000", 00:18:32.938 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:32.938 "listen_address": { 00:18:32.938 "trtype": "TCP", 00:18:32.938 "adrfam": "IPv4", 00:18:32.938 "traddr": "10.0.0.2", 00:18:32.938 "trsvcid": "4420" 00:18:32.938 }, 00:18:32.938 "peer_address": { 00:18:32.938 "trtype": "TCP", 00:18:32.938 "adrfam": "IPv4", 00:18:32.938 "traddr": "10.0.0.1", 00:18:32.938 "trsvcid": "54984" 00:18:32.938 }, 00:18:32.938 "auth": { 00:18:32.938 "state": "completed", 00:18:32.938 "digest": "sha256", 00:18:32.938 "dhgroup": "ffdhe6144" 00:18:32.938 } 00:18:32.938 } 00:18:32.938 ]' 00:18:32.938 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.197 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:33.197 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:33.197 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:33.197 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:33.198 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.198 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:18:33.198 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.457 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn: 00:18:33.457 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn: 00:18:34.027 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.027 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:34.027 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.027 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.027 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.027 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:34.027 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:34.027 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:34.287 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:18:34.287 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.287 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:34.287 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:34.287 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:34.287 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.287 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:34.287 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.287 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.287 09:50:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.287 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:34.287 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:34.287 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:34.548 00:18:34.548 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:34.548 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:34.548 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.807 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.808 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.808 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.808 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.808 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.808 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:34.808 { 00:18:34.808 "cntlid": 39, 00:18:34.808 "qid": 0, 00:18:34.808 "state": "enabled", 00:18:34.808 "thread": "nvmf_tgt_poll_group_000", 00:18:34.808 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:34.808 "listen_address": { 00:18:34.808 "trtype": "TCP", 00:18:34.808 "adrfam": "IPv4", 00:18:34.808 "traddr": "10.0.0.2", 00:18:34.808 "trsvcid": "4420" 00:18:34.808 }, 00:18:34.808 "peer_address": { 00:18:34.808 "trtype": "TCP", 00:18:34.808 "adrfam": "IPv4", 00:18:34.808 "traddr": "10.0.0.1", 00:18:34.808 "trsvcid": "55006" 00:18:34.808 }, 00:18:34.808 "auth": { 00:18:34.808 "state": "completed", 00:18:34.808 "digest": "sha256", 00:18:34.808 "dhgroup": "ffdhe6144" 00:18:34.808 } 00:18:34.808 } 00:18:34.808 ]' 00:18:34.808 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:34.808 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:34.808 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:34.808 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:34.808 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:34.808 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:18:34.808 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.808 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.071 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:18:35.071 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:18:35.642 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.642 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:35.642 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.642 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.642 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.642 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:35.642 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:35.642 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:35.642 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:35.903 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:18:35.903 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:35.903 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:35.903 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:35.903 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:35.903 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.903 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.903 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
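For reference while reading the trace: every connect_authenticate pass above (and below) repeats the same RPC sequence, shown here as a condensed sketch assembled strictly from commands already visible in this log. rpc.py abbreviates /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py, and <host-nqn> abbreviates nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be; key0/ckey0 and sha256/ffdhe8192 match the pass in progress at this point.

    # Host-side RPC (hostrpc, socket /var/tmp/host.sock): select the digest/dhgroup under test
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    # Target-side RPC (rpc_cmd): register the host with its DH-HMAC-CHAP key pair
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <host-nqn> --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Host-side: attach a controller, which drives the authentication handshake
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q <host-nqn> -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Target-side: confirm the qpair authenticated with the expected parameters
    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'    # expect "completed"
    # Host-side: detach before the next key/dhgroup combination
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0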
00:18:35.903 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.903 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.903 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.903 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.903 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.473 00:18:36.473 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.473 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.473 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.473 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.473 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.473 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.473 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.473 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.473 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.473 { 00:18:36.473 "cntlid": 41, 00:18:36.473 "qid": 0, 00:18:36.473 "state": "enabled", 00:18:36.473 "thread": "nvmf_tgt_poll_group_000", 00:18:36.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:36.473 "listen_address": { 00:18:36.473 "trtype": "TCP", 00:18:36.473 "adrfam": "IPv4", 00:18:36.473 "traddr": "10.0.0.2", 00:18:36.473 "trsvcid": "4420" 00:18:36.473 }, 00:18:36.473 "peer_address": { 00:18:36.473 "trtype": "TCP", 00:18:36.473 "adrfam": "IPv4", 00:18:36.473 "traddr": "10.0.0.1", 00:18:36.473 "trsvcid": "55018" 00:18:36.473 }, 00:18:36.473 "auth": { 00:18:36.473 "state": "completed", 00:18:36.473 "digest": "sha256", 00:18:36.473 "dhgroup": "ffdhe8192" 00:18:36.473 } 00:18:36.473 } 00:18:36.473 ]' 00:18:36.473 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:36.733 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:36.733 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:36.733 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:36.733 09:50:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:36.733 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.733 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.733 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.994 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=: 00:18:36.994 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=: 00:18:37.564 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.564 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.564 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.564 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.564 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.564 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:37.564 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:37.564 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:37.824 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:18:37.824 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:37.824 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:37.824 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:37.824 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:37.824 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.824 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.824 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.824 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.824 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.824 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.824 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.824 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.084 00:18:38.345 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:38.345 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:38.345 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.345 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.345 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.345 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.345 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.345 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.345 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.345 { 00:18:38.345 "cntlid": 43, 00:18:38.345 "qid": 0, 00:18:38.345 "state": "enabled", 00:18:38.345 "thread": "nvmf_tgt_poll_group_000", 00:18:38.345 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:38.345 "listen_address": { 00:18:38.345 "trtype": "TCP", 00:18:38.345 "adrfam": "IPv4", 00:18:38.345 "traddr": "10.0.0.2", 00:18:38.345 "trsvcid": "4420" 00:18:38.345 }, 00:18:38.345 "peer_address": { 00:18:38.345 "trtype": "TCP", 00:18:38.345 "adrfam": "IPv4", 00:18:38.345 "traddr": "10.0.0.1", 00:18:38.345 "trsvcid": "55046" 00:18:38.345 }, 00:18:38.345 "auth": { 00:18:38.345 "state": "completed", 00:18:38.345 "digest": "sha256", 00:18:38.345 "dhgroup": "ffdhe8192" 00:18:38.345 } 00:18:38.345 } 00:18:38.345 ]' 00:18:38.345 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.345 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:18:38.345 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.605 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:38.605 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.605 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.606 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.606 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.866 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==: 00:18:38.866 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==: 00:18:39.436 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.436 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:39.436 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.436 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.436 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.436 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:39.436 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:39.436 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:39.697 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:18:39.697 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.697 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:39.697 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:39.697 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:39.697 09:50:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.697 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.697 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.697 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.697 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.697 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.697 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.697 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.958 00:18:40.218 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:40.218 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:40.218 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.218 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.218 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.218 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.218 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.218 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.218 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:40.218 { 00:18:40.218 "cntlid": 45, 00:18:40.218 "qid": 0, 00:18:40.218 "state": "enabled", 00:18:40.218 "thread": "nvmf_tgt_poll_group_000", 00:18:40.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:40.218 "listen_address": { 00:18:40.218 "trtype": "TCP", 00:18:40.218 "adrfam": "IPv4", 00:18:40.218 "traddr": "10.0.0.2", 00:18:40.218 "trsvcid": "4420" 00:18:40.218 }, 00:18:40.218 "peer_address": { 00:18:40.218 "trtype": "TCP", 00:18:40.218 "adrfam": "IPv4", 00:18:40.218 "traddr": "10.0.0.1", 00:18:40.218 "trsvcid": "46226" 00:18:40.218 }, 00:18:40.218 "auth": { 00:18:40.218 "state": "completed", 00:18:40.218 "digest": "sha256", 00:18:40.218 "dhgroup": "ffdhe8192" 00:18:40.218 } 00:18:40.218 } 00:18:40.218 ]' 00:18:40.218 
09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:40.218 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:40.479 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:40.479 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:40.479 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:40.479 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.479 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.479 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.738 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn: 00:18:40.738 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn: 00:18:41.308 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.308 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:41.308 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.308 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.308 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.308 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:41.308 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:41.308 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:41.568 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:18:41.568 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:41.568 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:41.568 09:50:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:41.568 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:41.568 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.568 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:41.568 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.568 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.568 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.568 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:41.568 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:41.568 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:42.137 00:18:42.137 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:42.137 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:42.137 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.137 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.137 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.137 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.137 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.137 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.137 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:42.137 { 00:18:42.137 "cntlid": 47, 00:18:42.137 "qid": 0, 00:18:42.137 "state": "enabled", 00:18:42.137 "thread": "nvmf_tgt_poll_group_000", 00:18:42.137 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:42.137 "listen_address": { 00:18:42.137 "trtype": "TCP", 00:18:42.137 "adrfam": "IPv4", 00:18:42.137 "traddr": "10.0.0.2", 00:18:42.137 "trsvcid": "4420" 00:18:42.137 }, 00:18:42.137 "peer_address": { 00:18:42.137 "trtype": "TCP", 00:18:42.137 "adrfam": "IPv4", 00:18:42.137 "traddr": "10.0.0.1", 00:18:42.137 "trsvcid": "46254" 00:18:42.137 }, 00:18:42.137 "auth": { 00:18:42.137 "state": "completed", 00:18:42.137 
"digest": "sha256", 00:18:42.137 "dhgroup": "ffdhe8192" 00:18:42.137 } 00:18:42.137 } 00:18:42.137 ]' 00:18:42.137 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:42.137 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:42.137 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:42.137 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:42.137 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:42.398 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.398 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.398 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.398 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:18:42.398 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:18:43.337 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.337 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:43.337 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.337 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.337 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.337 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:43.337 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:43.337 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:43.337 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:43.337 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:43.337 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:18:43.337 09:50:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:43.337 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:43.337 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:43.337 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:43.337 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.337 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.337 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.337 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.337 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.337 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.337 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.337 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.597 00:18:43.597 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:43.597 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.597 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:43.866 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.866 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.866 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.866 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.866 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.866 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.866 { 00:18:43.866 "cntlid": 49, 00:18:43.866 "qid": 0, 00:18:43.866 "state": "enabled", 00:18:43.866 "thread": "nvmf_tgt_poll_group_000", 00:18:43.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:43.866 "listen_address": { 00:18:43.866 "trtype": "TCP", 00:18:43.866 "adrfam": "IPv4", 
00:18:43.866 "traddr": "10.0.0.2", 00:18:43.866 "trsvcid": "4420" 00:18:43.866 }, 00:18:43.866 "peer_address": { 00:18:43.866 "trtype": "TCP", 00:18:43.866 "adrfam": "IPv4", 00:18:43.866 "traddr": "10.0.0.1", 00:18:43.866 "trsvcid": "46276" 00:18:43.866 }, 00:18:43.866 "auth": { 00:18:43.866 "state": "completed", 00:18:43.866 "digest": "sha384", 00:18:43.866 "dhgroup": "null" 00:18:43.866 } 00:18:43.866 } 00:18:43.866 ]' 00:18:43.866 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:43.866 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:43.866 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:43.866 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:43.866 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:43.866 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.866 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.866 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.134 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=: 00:18:44.134 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=: 00:18:44.705 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.705 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:44.705 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.705 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.705 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.705 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:44.705 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:44.705 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:44.966 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:18:44.966 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:44.966 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:44.966 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:44.966 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:44.966 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.966 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.966 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.966 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.966 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.966 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.966 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.966 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.227 00:18:45.227 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:45.227 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:45.227 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.486 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.486 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.486 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.486 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.486 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.486 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:45.486 { 00:18:45.486 "cntlid": 51, 00:18:45.486 "qid": 0, 00:18:45.486 "state": "enabled", 
00:18:45.486 "thread": "nvmf_tgt_poll_group_000", 00:18:45.486 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:45.486 "listen_address": { 00:18:45.486 "trtype": "TCP", 00:18:45.486 "adrfam": "IPv4", 00:18:45.486 "traddr": "10.0.0.2", 00:18:45.486 "trsvcid": "4420" 00:18:45.486 }, 00:18:45.486 "peer_address": { 00:18:45.486 "trtype": "TCP", 00:18:45.486 "adrfam": "IPv4", 00:18:45.486 "traddr": "10.0.0.1", 00:18:45.486 "trsvcid": "46304" 00:18:45.486 }, 00:18:45.486 "auth": { 00:18:45.486 "state": "completed", 00:18:45.486 "digest": "sha384", 00:18:45.486 "dhgroup": "null" 00:18:45.486 } 00:18:45.486 } 00:18:45.486 ]' 00:18:45.486 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:45.486 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:45.486 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:45.486 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:45.486 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:45.486 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.486 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.486 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.748 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==: 00:18:45.748 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==: 00:18:46.319 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.319 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:46.319 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.319 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.319 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.319 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:46.319 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:18:46.319 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:46.319 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:18:46.319 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:18:46.581 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2
00:18:46.581 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:46.581 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:46.581 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:18:46.581 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:18:46.581 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:46.581 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:46.581 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:46.581 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:46.581 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:46.581 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:46.581 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:46.581 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:46.842
00:18:46.842 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:46.842 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:46.842 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:47.102 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:47.102 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:47.103 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:47.103 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:47.103 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:47.103 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:47.103 {
00:18:47.103 "cntlid": 53,
00:18:47.103 "qid": 0,
00:18:47.103 "state": "enabled",
00:18:47.103 "thread": "nvmf_tgt_poll_group_000",
00:18:47.103 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:47.103 "listen_address": {
00:18:47.103 "trtype": "TCP",
00:18:47.103 "adrfam": "IPv4",
00:18:47.103 "traddr": "10.0.0.2",
00:18:47.103 "trsvcid": "4420"
00:18:47.103 },
00:18:47.103 "peer_address": {
00:18:47.103 "trtype": "TCP",
00:18:47.103 "adrfam": "IPv4",
00:18:47.103 "traddr": "10.0.0.1",
00:18:47.103 "trsvcid": "46328"
00:18:47.103 },
00:18:47.103 "auth": {
00:18:47.103 "state": "completed",
00:18:47.103 "digest": "sha384",
00:18:47.103 "dhgroup": "null"
00:18:47.103 }
00:18:47.103 }
00:18:47.103 ]'
00:18:47.103 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:47.103 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:47.103 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:47.103 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:18:47.103 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:47.103 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:47.103 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:47.103 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:47.364 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn:
00:18:47.364 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn:
00:18:47.934 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:47.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:47.934 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:47.934 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:47.934 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:47.934 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:47.934 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:47.934 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:18:47.934 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:18:48.194 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3
00:18:48.194 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:48.194 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:48.194 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:18:48.194 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:18:48.194 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:48.194 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:18:48.194 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:48.194 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:48.194 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:48.194 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:48.194 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:48.194 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:48.454
00:18:48.454 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:48.454 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:48.454 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:48.454 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:48.454 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:48.454 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:48.454 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:48.454 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:48.715 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:48.715 {
00:18:48.715 "cntlid": 55,
00:18:48.715 "qid": 0,
00:18:48.715 "state": "enabled",
00:18:48.715 "thread": "nvmf_tgt_poll_group_000",
00:18:48.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:48.715 "listen_address": {
00:18:48.715 "trtype": "TCP",
00:18:48.715 "adrfam": "IPv4",
00:18:48.715 "traddr": "10.0.0.2",
00:18:48.715 "trsvcid": "4420"
00:18:48.715 },
00:18:48.715 "peer_address": {
00:18:48.715 "trtype": "TCP",
00:18:48.715 "adrfam": "IPv4",
00:18:48.715 "traddr": "10.0.0.1",
00:18:48.715 "trsvcid": "46362"
00:18:48.715 },
00:18:48.715 "auth": {
00:18:48.715 "state": "completed",
00:18:48.715 "digest": "sha384",
00:18:48.715 "dhgroup": "null"
00:18:48.715 }
00:18:48.715 }
00:18:48.715 ]'
00:18:48.715 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:48.715 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:48.715 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:48.715 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:18:48.715 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:48.715 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:48.715 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:48.715 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:48.977 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=:
00:18:48.977 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=:
00:18:49.547 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:49.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:49.547 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:49.547 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:49.547 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:49.547 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:49.547 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
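(Worth noting before the dhgroup loop advances: the key3 pass just above ran without a controller key. In auth.sh the ckey=(${ckeys[$3]:+...}) expansion collapses to an empty array when no controller key exists for that index, so nvmf_subsystem_add_host, bdev_connect and nvme_connect are all issued with the host key only: unidirectional authentication, where the host proves itself but never challenges the controller back. A sketch of just that expansion, assuming the same array shape the script implies, with a controller key for every index except the last:)

keys=(key0 key1 key2 key3)
ckeys=(ckey0 ckey1 ckey2 "")   # no controller key registered for index 3
for keyid in "${!keys[@]}"; do
    # An empty ckeys[keyid] makes ckey a zero-element array, so the
    # --dhchap-ctrlr-key flag simply disappears from the command line.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "key$keyid: ${#ckey[@]} extra arg(s)"   # prints 2, 2, 2, then 0
done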
00:18:49.547 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:49.547 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:49.547 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:49.807 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0
00:18:49.807 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:49.807 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:49.807 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:18:49.807 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:18:49.807 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:49.807 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:49.807 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:49.807 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:49.807 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:49.807 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:49.807 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:49.807 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:50.067
00:18:50.067 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:50.067 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:50.067 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:50.067 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:50.067 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:50.067 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:50.067 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:50.067 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:50.067 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:50.067 {
00:18:50.067 "cntlid": 57,
00:18:50.067 "qid": 0,
00:18:50.067 "state": "enabled",
00:18:50.067 "thread": "nvmf_tgt_poll_group_000",
00:18:50.067 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:50.067 "listen_address": {
00:18:50.067 "trtype": "TCP",
00:18:50.067 "adrfam": "IPv4",
00:18:50.067 "traddr": "10.0.0.2",
00:18:50.067 "trsvcid": "4420"
00:18:50.067 },
00:18:50.067 "peer_address": {
00:18:50.067 "trtype": "TCP",
00:18:50.067 "adrfam": "IPv4",
00:18:50.067 "traddr": "10.0.0.1",
00:18:50.067 "trsvcid": "39374"
00:18:50.067 },
00:18:50.067 "auth": {
00:18:50.067 "state": "completed",
00:18:50.067 "digest": "sha384",
00:18:50.067 "dhgroup": "ffdhe2048"
00:18:50.067 }
00:18:50.067 }
00:18:50.067 ]'
00:18:50.067 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:50.067 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:50.067 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:50.328 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:50.328 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:50.328 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:50.328 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:50.328 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:50.590 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=:
00:18:50.590 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=:
00:18:51.161 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:51.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:51.161 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:51.161 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:51.161 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:51.161 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:51.161 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:51.161 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:51.161 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:51.420 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1
00:18:51.420 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:51.420 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:51.420 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:18:51.420 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:18:51.420 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:51.421 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:51.421 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:51.421 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:51.421 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:51.421 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:51.421 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:51.421 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:51.421
00:18:51.680 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:51.680 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:51.680 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:51.680 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:51.681 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:51.681 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:51.681 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:51.681 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:51.681 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:51.681 {
00:18:51.681 "cntlid": 59,
00:18:51.681 "qid": 0,
00:18:51.681 "state": "enabled",
00:18:51.681 "thread": "nvmf_tgt_poll_group_000",
00:18:51.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:51.681 "listen_address": {
00:18:51.681 "trtype": "TCP",
00:18:51.681 "adrfam": "IPv4",
00:18:51.681 "traddr": "10.0.0.2",
00:18:51.681 "trsvcid": "4420"
00:18:51.681 },
00:18:51.681 "peer_address": {
00:18:51.681 "trtype": "TCP",
00:18:51.681 "adrfam": "IPv4",
00:18:51.681 "traddr": "10.0.0.1",
00:18:51.681 "trsvcid": "39410"
00:18:51.681 },
00:18:51.681 "auth": {
00:18:51.681 "state": "completed",
00:18:51.681 "digest": "sha384",
00:18:51.681 "dhgroup": "ffdhe2048"
00:18:51.681 }
00:18:51.681 }
00:18:51.681 ]'
00:18:51.681 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:51.681 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:51.681 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:51.941 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:51.941 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:51.941 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:51.941 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:51.941 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:52.200 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==:
00:18:52.200 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==:
00:18:52.771 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:52.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:52.771 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:52.771 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:52.771 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:52.771 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:52.771 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:52.771 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:52.771 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:53.031 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2
00:18:53.031 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:53.031 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:53.031 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:18:53.031 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:18:53.031 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:53.031 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:53.031 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:53.031 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:53.031 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:53.031 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:53.031 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:53.031 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:53.292
00:18:53.292 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:53.292 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:53.292 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:53.292 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:53.292 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:53.292 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:53.292 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:53.292 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:53.292 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:53.292 {
00:18:53.292 "cntlid": 61,
00:18:53.292 "qid": 0,
00:18:53.292 "state": "enabled",
00:18:53.292 "thread": "nvmf_tgt_poll_group_000",
00:18:53.292 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:53.292 "listen_address": {
00:18:53.292 "trtype": "TCP",
00:18:53.292 "adrfam": "IPv4",
00:18:53.292 "traddr": "10.0.0.2",
00:18:53.292 "trsvcid": "4420"
00:18:53.292 },
00:18:53.292 "peer_address": {
00:18:53.292 "trtype": "TCP",
00:18:53.292 "adrfam": "IPv4",
00:18:53.292 "traddr": "10.0.0.1",
00:18:53.292 "trsvcid": "39440"
00:18:53.292 },
00:18:53.292 "auth": {
00:18:53.292 "state": "completed",
00:18:53.292 "digest": "sha384",
00:18:53.292 "dhgroup": "ffdhe2048"
00:18:53.292 }
00:18:53.292 }
00:18:53.292 ]'
00:18:53.553 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:53.553 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:53.553 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:53.553 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:53.553 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:53.553 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:53.553 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:53.553 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:53.553 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn:
00:18:53.553 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn:
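(The nvme_connect call above drives the same handshake from the kernel initiator instead of the SPDK host stack: the DHHC-1 secrets are passed straight to nvme-cli. Below is a commented form of the exact connect this log keeps issuing; flag meanings follow standard nvme-cli semantics, and the two secret variables are placeholders for the DHHC-1 strings shown in the log:)

# -t/-a  : TCP transport to the target at 10.0.0.2 (trsvcid defaults to 4420 here)
# -n     : subsystem NQN to connect to
# -i 1   : a single I/O queue is plenty for an auth-only test
# -q     : host NQN the target was told to expect; --hostid carries its UUID
# -l 0   : ctrl-loss-tmo of 0, i.e. fail immediately rather than retry
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
    --dhchap-secret "$host_secret" --dhchap-ctrl-secret "$ctrl_secret"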
00:18:54.496 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:54.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:54.496 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:54.496 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:54.496 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:54.496 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:54.496 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:54.496 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:54.496 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:54.496 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3
00:18:54.496 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:54.496 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:54.496 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:18:54.496 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:18:54.496 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:54.496 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:18:54.496 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:54.496 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:54.496 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:54.496 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:54.496 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:54.497 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:54.758
00:18:54.758 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:54.758 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:54.758 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:55.019 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:55.019 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:55.019 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:55.019 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:55.019 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:55.019 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:55.019 {
00:18:55.019 "cntlid": 63,
00:18:55.019 "qid": 0,
00:18:55.019 "state": "enabled",
00:18:55.019 "thread": "nvmf_tgt_poll_group_000",
00:18:55.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:55.019 "listen_address": {
00:18:55.019 "trtype": "TCP",
00:18:55.019 "adrfam": "IPv4",
00:18:55.019 "traddr": "10.0.0.2",
00:18:55.019 "trsvcid": "4420"
00:18:55.019 },
00:18:55.019 "peer_address": {
00:18:55.019 "trtype": "TCP",
00:18:55.019 "adrfam": "IPv4",
00:18:55.019 "traddr": "10.0.0.1",
00:18:55.019 "trsvcid": "39472"
00:18:55.019 },
00:18:55.019 "auth": {
00:18:55.019 "state": "completed",
00:18:55.019 "digest": "sha384",
00:18:55.019 "dhgroup": "ffdhe2048"
00:18:55.019 }
00:18:55.019 }
00:18:55.019 ]'
00:18:55.019 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:55.019 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:55.019 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:55.019 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:55.019 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:55.019 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:55.020 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:55.020 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:55.281 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=:
00:18:55.281 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=:
00:18:55.852 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:55.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:55.852 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:55.852 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:55.852 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:56.113 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:56.113 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:18:56.113 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:56.113 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:18:56.113 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:18:56.113 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0
00:18:56.113 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:56.113 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:56.113 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:18:56.113 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:18:56.113 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:56.113 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:56.113 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:56.113 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:56.113 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:56.113 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:56.113 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:56.113 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:56.374
00:18:56.374 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:56.374 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:56.374 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:56.638 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:56.638 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:56.638 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:56.638 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:56.638 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:56.638 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:56.638 {
00:18:56.638 "cntlid": 65,
00:18:56.638 "qid": 0,
00:18:56.638 "state": "enabled",
00:18:56.638 "thread": "nvmf_tgt_poll_group_000",
00:18:56.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:56.638 "listen_address": {
00:18:56.638 "trtype": "TCP",
00:18:56.638 "adrfam": "IPv4",
00:18:56.638 "traddr": "10.0.0.2",
00:18:56.638 "trsvcid": "4420"
00:18:56.638 },
00:18:56.638 "peer_address": {
00:18:56.638 "trtype": "TCP",
00:18:56.638 "adrfam": "IPv4",
00:18:56.638 "traddr": "10.0.0.1",
00:18:56.638 "trsvcid": "39500"
00:18:56.638 },
00:18:56.638 "auth": {
00:18:56.638 "state": "completed",
00:18:56.638 "digest": "sha384",
00:18:56.638 "dhgroup": "ffdhe3072"
00:18:56.638 }
00:18:56.638 }
00:18:56.638 ]'
00:18:56.638 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:56.638 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:56.638 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:56.638 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:56.638 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:56.944 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:56.944 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:56.944 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
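(The for dhgroup / for keyid xtrace lines that keep reappearing are the two loops steering this whole section: for the sha384 digest, every Diffie-Hellman group is swept against every key index. Condensed control flow, with the lists trimmed to the values this part of the log actually exercises; the script's full dhgroup list continues past ffdhe3072:)

digest=sha384
dhgroups=(null ffdhe2048 ffdhe3072)   # groups covered in this excerpt
keys=(key0 key1 key2 key3)
for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        # Each pass re-runs the set_options / add_host / attach / verify /
        # teardown cycle sketched earlier, once per key index.
        connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
done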
00:18:56.944 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=:
00:18:56.944 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=:
00:18:57.656 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:57.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:57.656 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:57.656 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:57.656 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:57.656 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:57.656 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:57.656 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:18:57.656 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:18:57.917 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1
00:18:57.917 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:57.917 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:57.917 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:18:57.917 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:18:57.917 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:57.917 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:57.917 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:57.917 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:57.917 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:57.917 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:57.917 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:57.917 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:58.177
00:18:58.177 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:58.177 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:58.177 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:58.177 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:58.177 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:58.177 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:58.177 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:58.177 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:58.177 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:58.177 {
00:18:58.177 "cntlid": 67,
00:18:58.177 "qid": 0,
00:18:58.177 "state": "enabled",
00:18:58.177 "thread": "nvmf_tgt_poll_group_000",
00:18:58.177 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:58.177 "listen_address": {
00:18:58.177 "trtype": "TCP",
00:18:58.177 "adrfam": "IPv4",
00:18:58.177 "traddr": "10.0.0.2",
00:18:58.177 "trsvcid": "4420"
00:18:58.177 },
00:18:58.177 "peer_address": {
00:18:58.177 "trtype": "TCP",
00:18:58.177 "adrfam": "IPv4",
00:18:58.177 "traddr": "10.0.0.1",
00:18:58.177 "trsvcid": "39530"
00:18:58.177 },
00:18:58.177 "auth": {
00:18:58.177 "state": "completed",
00:18:58.177 "digest": "sha384",
00:18:58.177 "dhgroup": "ffdhe3072"
00:18:58.177 }
00:18:58.177 }
00:18:58.177 ]'
00:18:58.177 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:58.441 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:58.441 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:58.441 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:58.441 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:58.441 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:58.441 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:58.441 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:58.702 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==:
00:18:58.702 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==:
00:18:59.273 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:59.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:59.273 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:59.273 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:59.273 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:59.273 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:59.273 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:59.273 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:18:59.274 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:18:59.534 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2
00:18:59.534 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:59.534 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:59.534 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:18:59.534 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:18:59.534 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:59.534 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:59.534 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:59.534 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:59.534 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:59.534 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:59.534 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:59.534 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:59.795
00:18:59.795 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:59.795 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:59.795 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:59.795 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:59.795 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:59.795 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:59.795 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:59.795 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:59.795 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:59.795 {
00:18:59.795 "cntlid": 69,
00:18:59.795 "qid": 0,
00:18:59.795 "state": "enabled",
00:18:59.795 "thread": "nvmf_tgt_poll_group_000",
00:18:59.795 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:59.795 "listen_address": {
00:18:59.795 "trtype": "TCP",
00:18:59.795 "adrfam": "IPv4",
00:18:59.795 "traddr": "10.0.0.2",
00:18:59.795 "trsvcid": "4420"
00:18:59.795 },
00:18:59.795 "peer_address": {
00:18:59.795 "trtype": "TCP",
00:18:59.795 "adrfam": "IPv4",
00:18:59.795 "traddr": "10.0.0.1",
00:18:59.795 "trsvcid": "53902"
00:18:59.795 },
00:18:59.795 "auth": {
00:18:59.795 "state": "completed",
00:18:59.795 "digest": "sha384",
00:18:59.795 "dhgroup": "ffdhe3072"
00:18:59.795 }
00:18:59.795 }
00:18:59.795 ]'
00:18:59.795 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:00.056 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:00.056 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:00.056 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:19:00.056 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:00.056 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
bdev_nvme_detach_controller nvme0 00:19:00.056 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn: 00:19:00.056 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn: 00:19:00.998 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.998 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:00.998 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.998 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.998 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.998 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:00.998 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:00.998 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:00.998 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:19:00.998 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:00.998 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:00.998 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:00.998 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:00.998 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.998 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:00.998 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.998 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.998 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.998 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
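The key3 round being set up here is the unidirectional case: no controller key was generated for key id 3, so the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion in the trace collapses to an empty array and every RPC in the round is issued without --dhchap-ctrlr-key. A minimal sketch of one such round, assuming (as earlier in target/auth.sh) that the key names are already registered with both the target and the host RPC server at /var/tmp/host.sock, and with $hostnqn standing in for the uuid NQN used throughout this log:

# unidirectional DH-HMAC-CHAP round (sketch; key3 has no companion controller key)
ckey=(${ckeys[3]:+--dhchap-ctrlr-key "ckey3"})   # expands to nothing when ckeys[3] is empty
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key3 "${ckey[@]}"
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 "${ckey[@]}"
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0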
00:19:00.998 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:00.998 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:01.259 00:19:01.259 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.259 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.259 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.521 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.521 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.521 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.521 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.521 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.521 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.521 { 00:19:01.521 "cntlid": 71, 00:19:01.521 "qid": 0, 00:19:01.521 "state": "enabled", 00:19:01.521 "thread": "nvmf_tgt_poll_group_000", 00:19:01.521 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:01.521 "listen_address": { 00:19:01.521 "trtype": "TCP", 00:19:01.521 "adrfam": "IPv4", 00:19:01.521 "traddr": "10.0.0.2", 00:19:01.521 "trsvcid": "4420" 00:19:01.521 }, 00:19:01.521 "peer_address": { 00:19:01.521 "trtype": "TCP", 00:19:01.521 "adrfam": "IPv4", 00:19:01.521 "traddr": "10.0.0.1", 00:19:01.521 "trsvcid": "53934" 00:19:01.521 }, 00:19:01.521 "auth": { 00:19:01.521 "state": "completed", 00:19:01.521 "digest": "sha384", 00:19:01.521 "dhgroup": "ffdhe3072" 00:19:01.521 } 00:19:01.521 } 00:19:01.521 ]' 00:19:01.521 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:01.521 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:01.521 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:01.521 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:01.521 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:01.521 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.521 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.521 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.783 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:19:01.783 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:19:02.354 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.354 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:02.354 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.354 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.354 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.354 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:02.354 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:02.354 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:02.354 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:02.615 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:19:02.615 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:02.615 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:02.615 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:02.615 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:02.615 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.615 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.615 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.615 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.615 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
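The nvme connect legs traced above hand nvme-cli the secrets literally rather than by keyring name. In the DHHC-1:NN:&lt;base64&gt;: representation, the NN field records the transform applied to the secret (00 = none, 01/02/03 = SHA-256/384/512), which is why the :01:, :02: and :03: secrets in this log differ in length. A sketch of one kernel-initiator leg, with placeholder secrets and $hostnqn/$hostid standing in for the uuid used throughout:

# host side: authenticate the fabrics connection with explicit DHHC-1 secrets (sketch)
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret 'DHHC-1:00:<base64-host-secret>:' \
    --dhchap-ctrl-secret 'DHHC-1:00:<base64-controller-secret>:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect: disconnected 1 controller(s)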
00:19:02.615 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.615 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.615 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.876 00:19:02.876 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:02.876 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:02.876 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.136 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.136 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.136 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.136 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.136 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.136 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:03.136 { 00:19:03.136 "cntlid": 73, 00:19:03.136 "qid": 0, 00:19:03.136 "state": "enabled", 00:19:03.136 "thread": "nvmf_tgt_poll_group_000", 00:19:03.136 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:03.136 "listen_address": { 00:19:03.136 "trtype": "TCP", 00:19:03.136 "adrfam": "IPv4", 00:19:03.136 "traddr": "10.0.0.2", 00:19:03.136 "trsvcid": "4420" 00:19:03.136 }, 00:19:03.136 "peer_address": { 00:19:03.136 "trtype": "TCP", 00:19:03.136 "adrfam": "IPv4", 00:19:03.136 "traddr": "10.0.0.1", 00:19:03.136 "trsvcid": "53946" 00:19:03.136 }, 00:19:03.136 "auth": { 00:19:03.136 "state": "completed", 00:19:03.136 "digest": "sha384", 00:19:03.136 "dhgroup": "ffdhe4096" 00:19:03.136 } 00:19:03.136 } 00:19:03.136 ]' 00:19:03.136 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:03.136 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:03.136 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:03.136 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:03.136 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:03.136 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.136 
09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.136 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.397 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=: 00:19:03.397 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=: 00:19:03.973 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.973 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:03.973 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.973 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.973 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.973 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:03.973 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:03.973 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:04.232 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:19:04.232 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:04.232 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:04.232 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:04.233 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:04.233 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.233 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.233 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.233 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.233 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.233 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.233 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.233 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.495 00:19:04.495 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:04.495 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:04.495 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.755 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.755 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.755 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.755 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.755 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.755 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.755 { 00:19:04.755 "cntlid": 75, 00:19:04.755 "qid": 0, 00:19:04.755 "state": "enabled", 00:19:04.755 "thread": "nvmf_tgt_poll_group_000", 00:19:04.755 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:04.755 "listen_address": { 00:19:04.755 "trtype": "TCP", 00:19:04.755 "adrfam": "IPv4", 00:19:04.755 "traddr": "10.0.0.2", 00:19:04.755 "trsvcid": "4420" 00:19:04.755 }, 00:19:04.755 "peer_address": { 00:19:04.755 "trtype": "TCP", 00:19:04.755 "adrfam": "IPv4", 00:19:04.755 "traddr": "10.0.0.1", 00:19:04.755 "trsvcid": "53964" 00:19:04.755 }, 00:19:04.755 "auth": { 00:19:04.755 "state": "completed", 00:19:04.755 "digest": "sha384", 00:19:04.755 "dhgroup": "ffdhe4096" 00:19:04.755 } 00:19:04.755 } 00:19:04.755 ]' 00:19:04.755 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.755 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:04.755 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:04.755 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:19:04.755 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:04.755 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.755 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.756 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.016 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==: 00:19:05.016 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==: 00:19:05.587 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.848 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:05.848 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.848 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.848 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.848 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.848 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:05.848 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:05.848 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:19:05.848 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:05.848 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:05.848 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:05.848 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:05.848 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.848 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.848 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.848 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.848 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.848 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.848 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.848 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.109 00:19:06.109 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.109 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.109 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.370 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.370 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.370 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.370 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.370 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.370 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.370 { 00:19:06.370 "cntlid": 77, 00:19:06.370 "qid": 0, 00:19:06.370 "state": "enabled", 00:19:06.370 "thread": "nvmf_tgt_poll_group_000", 00:19:06.370 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:06.370 "listen_address": { 00:19:06.370 "trtype": "TCP", 00:19:06.370 "adrfam": "IPv4", 00:19:06.370 "traddr": "10.0.0.2", 00:19:06.370 "trsvcid": "4420" 00:19:06.370 }, 00:19:06.370 "peer_address": { 00:19:06.370 "trtype": "TCP", 00:19:06.370 "adrfam": "IPv4", 00:19:06.370 "traddr": "10.0.0.1", 00:19:06.370 "trsvcid": "53978" 00:19:06.370 }, 00:19:06.370 "auth": { 00:19:06.370 "state": "completed", 00:19:06.370 "digest": "sha384", 00:19:06.370 "dhgroup": "ffdhe4096" 00:19:06.370 } 00:19:06.370 } 00:19:06.370 ]' 00:19:06.370 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.370 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:06.370 09:51:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.370 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:06.370 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.370 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.370 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.370 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.631 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn: 00:19:06.631 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn: 00:19:07.204 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.465 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:07.465 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.465 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.465 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.465 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:07.465 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:07.465 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:07.465 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:19:07.465 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:07.465 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:07.465 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:07.465 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:07.465 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.465 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:07.465 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.465 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.465 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.465 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:07.465 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:07.465 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:07.725 00:19:07.725 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:07.725 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:07.725 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.985 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.985 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.985 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.985 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.985 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.985 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:07.985 { 00:19:07.985 "cntlid": 79, 00:19:07.985 "qid": 0, 00:19:07.985 "state": "enabled", 00:19:07.985 "thread": "nvmf_tgt_poll_group_000", 00:19:07.985 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:07.985 "listen_address": { 00:19:07.985 "trtype": "TCP", 00:19:07.985 "adrfam": "IPv4", 00:19:07.985 "traddr": "10.0.0.2", 00:19:07.985 "trsvcid": "4420" 00:19:07.985 }, 00:19:07.985 "peer_address": { 00:19:07.985 "trtype": "TCP", 00:19:07.985 "adrfam": "IPv4", 00:19:07.985 "traddr": "10.0.0.1", 00:19:07.985 "trsvcid": "54010" 00:19:07.985 }, 00:19:07.985 "auth": { 00:19:07.985 "state": "completed", 00:19:07.985 "digest": "sha384", 00:19:07.985 "dhgroup": "ffdhe4096" 00:19:07.985 } 00:19:07.985 } 00:19:07.985 ]' 00:19:07.985 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:07.985 09:51:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:07.985 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:07.985 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:07.985 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:08.247 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.247 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.247 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.247 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:19:08.247 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:19:09.189 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.189 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:09.189 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.189 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.189 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.189 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:09.189 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:09.189 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:09.189 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:09.189 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:19:09.189 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:09.189 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:09.189 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:09.189 09:51:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:09.189 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.189 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.189 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.189 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.189 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.189 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.189 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.189 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.450 00:19:09.450 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:09.450 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:09.450 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.711 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.711 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.711 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.711 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.711 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.711 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:09.711 { 00:19:09.711 "cntlid": 81, 00:19:09.711 "qid": 0, 00:19:09.711 "state": "enabled", 00:19:09.711 "thread": "nvmf_tgt_poll_group_000", 00:19:09.711 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:09.711 "listen_address": { 00:19:09.711 "trtype": "TCP", 00:19:09.711 "adrfam": "IPv4", 00:19:09.711 "traddr": "10.0.0.2", 00:19:09.711 "trsvcid": "4420" 00:19:09.711 }, 00:19:09.711 "peer_address": { 00:19:09.711 "trtype": "TCP", 00:19:09.711 "adrfam": "IPv4", 00:19:09.711 "traddr": "10.0.0.1", 00:19:09.711 "trsvcid": "51764" 00:19:09.711 }, 00:19:09.711 "auth": { 00:19:09.711 "state": "completed", 00:19:09.711 "digest": 
"sha384", 00:19:09.711 "dhgroup": "ffdhe6144" 00:19:09.711 } 00:19:09.711 } 00:19:09.711 ]' 00:19:09.711 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:09.711 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:09.711 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:09.711 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:09.711 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:09.973 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.973 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.973 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.973 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=: 00:19:09.973 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=: 00:19:10.916 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.916 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:10.916 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.916 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.916 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.916 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:10.916 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:10.916 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:10.916 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:19:10.916 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:10.916 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:10.916 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:10.916 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:10.916 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.916 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.916 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.916 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.916 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.916 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.916 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.916 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.177 00:19:11.177 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:11.177 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:11.177 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.438 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.438 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.438 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.438 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.438 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.438 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:11.438 { 00:19:11.438 "cntlid": 83, 00:19:11.438 "qid": 0, 00:19:11.438 "state": "enabled", 00:19:11.438 "thread": "nvmf_tgt_poll_group_000", 00:19:11.438 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:11.438 "listen_address": { 00:19:11.438 "trtype": "TCP", 00:19:11.438 "adrfam": "IPv4", 00:19:11.438 "traddr": "10.0.0.2", 00:19:11.438 
"trsvcid": "4420" 00:19:11.438 }, 00:19:11.438 "peer_address": { 00:19:11.438 "trtype": "TCP", 00:19:11.438 "adrfam": "IPv4", 00:19:11.438 "traddr": "10.0.0.1", 00:19:11.438 "trsvcid": "51786" 00:19:11.438 }, 00:19:11.438 "auth": { 00:19:11.438 "state": "completed", 00:19:11.438 "digest": "sha384", 00:19:11.438 "dhgroup": "ffdhe6144" 00:19:11.438 } 00:19:11.438 } 00:19:11.438 ]' 00:19:11.438 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:11.438 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:11.438 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:11.438 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:11.438 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:11.700 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.700 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.700 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.700 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==: 00:19:11.700 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==: 00:19:12.271 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.271 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:12.271 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.271 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.533 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.533 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:12.533 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:12.533 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:12.533 
09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:19:12.533 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:12.533 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:12.533 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:12.533 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:12.533 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.533 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.533 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.533 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.533 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.533 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.533 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.533 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.795 00:19:13.055 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:13.055 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:13.055 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.055 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.055 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.055 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.055 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.055 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.055 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:13.055 { 00:19:13.055 "cntlid": 85, 00:19:13.055 "qid": 0, 00:19:13.055 "state": "enabled", 00:19:13.055 "thread": "nvmf_tgt_poll_group_000", 00:19:13.055 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:13.055 "listen_address": { 00:19:13.055 "trtype": "TCP", 00:19:13.055 "adrfam": "IPv4", 00:19:13.055 "traddr": "10.0.0.2", 00:19:13.055 "trsvcid": "4420" 00:19:13.055 }, 00:19:13.055 "peer_address": { 00:19:13.055 "trtype": "TCP", 00:19:13.055 "adrfam": "IPv4", 00:19:13.055 "traddr": "10.0.0.1", 00:19:13.055 "trsvcid": "51804" 00:19:13.055 }, 00:19:13.055 "auth": { 00:19:13.055 "state": "completed", 00:19:13.055 "digest": "sha384", 00:19:13.055 "dhgroup": "ffdhe6144" 00:19:13.055 } 00:19:13.055 } 00:19:13.055 ]' 00:19:13.055 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:13.055 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:13.055 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:13.316 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:13.316 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:13.316 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.316 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.316 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.316 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn: 00:19:13.316 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn: 00:19:14.256 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.256 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:14.256 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.256 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.256 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.256 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:14.256 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:14.256 09:51:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:14.256 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:19:14.256 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:14.256 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:14.256 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:14.256 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:14.256 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.256 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:14.256 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.256 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.256 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.256 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:14.256 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:14.256 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:14.516 00:19:14.516 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:14.516 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:14.516 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.776 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.776 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.776 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.776 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.776 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.776 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:14.776 { 00:19:14.776 "cntlid": 87, 
00:19:14.776 "qid": 0, 00:19:14.776 "state": "enabled", 00:19:14.776 "thread": "nvmf_tgt_poll_group_000", 00:19:14.776 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:14.776 "listen_address": { 00:19:14.776 "trtype": "TCP", 00:19:14.776 "adrfam": "IPv4", 00:19:14.776 "traddr": "10.0.0.2", 00:19:14.776 "trsvcid": "4420" 00:19:14.776 }, 00:19:14.776 "peer_address": { 00:19:14.776 "trtype": "TCP", 00:19:14.776 "adrfam": "IPv4", 00:19:14.776 "traddr": "10.0.0.1", 00:19:14.776 "trsvcid": "51844" 00:19:14.776 }, 00:19:14.776 "auth": { 00:19:14.776 "state": "completed", 00:19:14.776 "digest": "sha384", 00:19:14.776 "dhgroup": "ffdhe6144" 00:19:14.776 } 00:19:14.776 } 00:19:14.776 ]' 00:19:14.776 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:14.776 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:14.776 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:14.776 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:14.776 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:15.036 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.036 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.036 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.036 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:19:15.036 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:19:15.975 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.975 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:15.975 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.975 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.975 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.975 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:15.975 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:15.975 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:15.975 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:15.975 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:19:15.975 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:15.975 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:15.975 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:15.975 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:15.975 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.975 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.975 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.975 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.975 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.975 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.975 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.975 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.548 00:19:16.548 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:16.548 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:16.548 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.548 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.548 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.548 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.548 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.548 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.548 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:16.548 { 00:19:16.548 "cntlid": 89, 00:19:16.548 "qid": 0, 00:19:16.548 "state": "enabled", 00:19:16.548 "thread": "nvmf_tgt_poll_group_000", 00:19:16.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:16.548 "listen_address": { 00:19:16.548 "trtype": "TCP", 00:19:16.548 "adrfam": "IPv4", 00:19:16.548 "traddr": "10.0.0.2", 00:19:16.548 "trsvcid": "4420" 00:19:16.548 }, 00:19:16.548 "peer_address": { 00:19:16.548 "trtype": "TCP", 00:19:16.548 "adrfam": "IPv4", 00:19:16.548 "traddr": "10.0.0.1", 00:19:16.548 "trsvcid": "51866" 00:19:16.548 }, 00:19:16.548 "auth": { 00:19:16.548 "state": "completed", 00:19:16.548 "digest": "sha384", 00:19:16.548 "dhgroup": "ffdhe8192" 00:19:16.548 } 00:19:16.548 } 00:19:16.548 ]' 00:19:16.548 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:16.808 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:16.808 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:16.808 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:16.808 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:16.808 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.808 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.808 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.069 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=: 00:19:17.069 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=: 00:19:17.638 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.638 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:17.638 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.638 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.638 09:51:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.638 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:17.638 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:17.638 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:17.899 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:19:17.899 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:17.899 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:17.899 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:17.899 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:17.899 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.899 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.899 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.899 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.899 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.899 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.899 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.899 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.159 00:19:18.420 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:18.420 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:18.420 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.420 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.420 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:19:18.420 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.420 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.420 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.420 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:18.420 { 00:19:18.420 "cntlid": 91, 00:19:18.420 "qid": 0, 00:19:18.420 "state": "enabled", 00:19:18.420 "thread": "nvmf_tgt_poll_group_000", 00:19:18.420 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:18.420 "listen_address": { 00:19:18.420 "trtype": "TCP", 00:19:18.420 "adrfam": "IPv4", 00:19:18.420 "traddr": "10.0.0.2", 00:19:18.420 "trsvcid": "4420" 00:19:18.420 }, 00:19:18.420 "peer_address": { 00:19:18.420 "trtype": "TCP", 00:19:18.420 "adrfam": "IPv4", 00:19:18.420 "traddr": "10.0.0.1", 00:19:18.420 "trsvcid": "51898" 00:19:18.420 }, 00:19:18.420 "auth": { 00:19:18.420 "state": "completed", 00:19:18.420 "digest": "sha384", 00:19:18.420 "dhgroup": "ffdhe8192" 00:19:18.420 } 00:19:18.420 } 00:19:18.420 ]' 00:19:18.420 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:18.420 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:18.420 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:18.680 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:18.680 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:18.680 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.680 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.680 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.680 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==: 00:19:18.680 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==: 00:19:19.620 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.620 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:19.620 09:51:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.620 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.620 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.620 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:19.620 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:19.620 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:19.620 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:19:19.620 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:19.620 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:19.620 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:19.620 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:19.620 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.620 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.620 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.620 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.620 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.620 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.620 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.620 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.193 00:19:20.193 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:20.193 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.193 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:20.455 09:51:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.455 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.455 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.455 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.455 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.455 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:20.455 { 00:19:20.455 "cntlid": 93, 00:19:20.455 "qid": 0, 00:19:20.455 "state": "enabled", 00:19:20.455 "thread": "nvmf_tgt_poll_group_000", 00:19:20.455 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:20.455 "listen_address": { 00:19:20.455 "trtype": "TCP", 00:19:20.455 "adrfam": "IPv4", 00:19:20.455 "traddr": "10.0.0.2", 00:19:20.455 "trsvcid": "4420" 00:19:20.455 }, 00:19:20.455 "peer_address": { 00:19:20.455 "trtype": "TCP", 00:19:20.455 "adrfam": "IPv4", 00:19:20.455 "traddr": "10.0.0.1", 00:19:20.455 "trsvcid": "51868" 00:19:20.455 }, 00:19:20.455 "auth": { 00:19:20.455 "state": "completed", 00:19:20.455 "digest": "sha384", 00:19:20.455 "dhgroup": "ffdhe8192" 00:19:20.455 } 00:19:20.455 } 00:19:20.455 ]' 00:19:20.455 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:20.455 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:20.455 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:20.455 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:20.455 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:20.455 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.455 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.455 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.715 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn: 00:19:20.715 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn: 00:19:21.288 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.288 09:51:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:21.288 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.288 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.288 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.288 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:21.288 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:21.288 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:21.550 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:19:21.550 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:21.550 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:21.550 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:21.550 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:21.550 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.550 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:21.550 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.550 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.550 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.550 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:21.550 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:21.550 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:22.123 00:19:22.123 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:22.123 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:22.123 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.123 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.123 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.123 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.123 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.123 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.123 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:22.123 { 00:19:22.123 "cntlid": 95, 00:19:22.123 "qid": 0, 00:19:22.123 "state": "enabled", 00:19:22.123 "thread": "nvmf_tgt_poll_group_000", 00:19:22.123 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:22.123 "listen_address": { 00:19:22.123 "trtype": "TCP", 00:19:22.123 "adrfam": "IPv4", 00:19:22.123 "traddr": "10.0.0.2", 00:19:22.123 "trsvcid": "4420" 00:19:22.123 }, 00:19:22.123 "peer_address": { 00:19:22.123 "trtype": "TCP", 00:19:22.123 "adrfam": "IPv4", 00:19:22.123 "traddr": "10.0.0.1", 00:19:22.123 "trsvcid": "51902" 00:19:22.123 }, 00:19:22.123 "auth": { 00:19:22.123 "state": "completed", 00:19:22.123 "digest": "sha384", 00:19:22.123 "dhgroup": "ffdhe8192" 00:19:22.123 } 00:19:22.123 } 00:19:22.123 ]' 00:19:22.123 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:22.383 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:22.383 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:22.383 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:22.383 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:22.383 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.383 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.383 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.644 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:19:22.644 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:19:23.214 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.214 09:51:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:23.214 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.214 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.214 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.214 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:23.214 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:23.214 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:23.214 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:23.214 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:23.474 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:19:23.474 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:23.474 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:23.474 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:23.474 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:23.474 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.474 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.474 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.475 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.475 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.475 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.475 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.475 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.475 00:19:23.736 
09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:23.736 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:23.736 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.736 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.736 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.736 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.736 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.736 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.736 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:23.736 { 00:19:23.736 "cntlid": 97, 00:19:23.736 "qid": 0, 00:19:23.736 "state": "enabled", 00:19:23.736 "thread": "nvmf_tgt_poll_group_000", 00:19:23.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:23.736 "listen_address": { 00:19:23.736 "trtype": "TCP", 00:19:23.736 "adrfam": "IPv4", 00:19:23.736 "traddr": "10.0.0.2", 00:19:23.736 "trsvcid": "4420" 00:19:23.736 }, 00:19:23.736 "peer_address": { 00:19:23.736 "trtype": "TCP", 00:19:23.736 "adrfam": "IPv4", 00:19:23.736 "traddr": "10.0.0.1", 00:19:23.736 "trsvcid": "51914" 00:19:23.736 }, 00:19:23.736 "auth": { 00:19:23.736 "state": "completed", 00:19:23.736 "digest": "sha512", 00:19:23.736 "dhgroup": "null" 00:19:23.736 } 00:19:23.736 } 00:19:23.736 ]' 00:19:23.736 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:23.736 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:23.736 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:23.997 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:23.997 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:23.997 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.997 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.997 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.997 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=: 00:19:23.997 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=: 00:19:24.938 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.938 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:24.938 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.938 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.938 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.938 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:24.938 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:24.938 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:24.938 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:19:24.938 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:24.939 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:24.939 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:24.939 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:24.939 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.939 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.939 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.939 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.939 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.939 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.939 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.939 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.200 00:19:25.200 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:25.200 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:25.200 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.461 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.461 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.461 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.461 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.461 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.461 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:25.461 { 00:19:25.461 "cntlid": 99, 00:19:25.461 "qid": 0, 00:19:25.461 "state": "enabled", 00:19:25.461 "thread": "nvmf_tgt_poll_group_000", 00:19:25.461 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:25.461 "listen_address": { 00:19:25.461 "trtype": "TCP", 00:19:25.461 "adrfam": "IPv4", 00:19:25.461 "traddr": "10.0.0.2", 00:19:25.461 "trsvcid": "4420" 00:19:25.461 }, 00:19:25.461 "peer_address": { 00:19:25.461 "trtype": "TCP", 00:19:25.461 "adrfam": "IPv4", 00:19:25.461 "traddr": "10.0.0.1", 00:19:25.461 "trsvcid": "51946" 00:19:25.461 }, 00:19:25.461 "auth": { 00:19:25.461 "state": "completed", 00:19:25.461 "digest": "sha512", 00:19:25.461 "dhgroup": "null" 00:19:25.461 } 00:19:25.461 } 00:19:25.461 ]' 00:19:25.461 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:25.461 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:25.461 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:25.461 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:25.461 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:25.461 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.461 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.461 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.722 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==: 00:19:25.722 09:51:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==: 00:19:26.292 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.292 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:26.292 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.292 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.293 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.293 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:26.293 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:26.293 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:26.555 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:19:26.555 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:26.555 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:26.555 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:26.555 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:26.555 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.555 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.555 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.555 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.555 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.555 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.555 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
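The sha512/null pass above repeats the same cycle for each key index. Condensed into stand-alone commands, one iteration looks roughly like the sketch below. This is an illustrative condensation, not auth.sh itself: it assumes the sockets, NQNs and addresses shown in this log (target on the default RPC socket, host bdev layer on /var/tmp/host.sock), paths are shortened to scripts/rpc.py as if run from the spdk tree, HOSTNQN is an illustrative variable, and key2/ckey2 name keys registered earlier in the run.

# one connect_authenticate-style cycle (illustrative condensation, not auth.sh itself)
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be"
# host side: restrict negotiation to the digest/dhgroup pair under test
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups null
# target side: authorize the host with a bidirectional DH-HMAC-CHAP key pair
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
# host side: attach; the authentication transaction runs during connect
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
# target side: confirm the qpair finished auth with the expected parameters
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth | "\(.state) \(.digest) \(.dhgroup)"'   # completed sha512 null
# host side: tear down before the next key/dhgroup combination
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0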
00:19:26.555 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.816 00:19:26.816 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:26.816 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:26.816 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.076 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.077 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.077 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.077 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.077 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.077 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:27.077 { 00:19:27.077 "cntlid": 101, 00:19:27.077 "qid": 0, 00:19:27.077 "state": "enabled", 00:19:27.077 "thread": "nvmf_tgt_poll_group_000", 00:19:27.077 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:27.077 "listen_address": { 00:19:27.077 "trtype": "TCP", 00:19:27.077 "adrfam": "IPv4", 00:19:27.077 "traddr": "10.0.0.2", 00:19:27.077 "trsvcid": "4420" 00:19:27.077 }, 00:19:27.077 "peer_address": { 00:19:27.077 "trtype": "TCP", 00:19:27.077 "adrfam": "IPv4", 00:19:27.077 "traddr": "10.0.0.1", 00:19:27.077 "trsvcid": "51966" 00:19:27.077 }, 00:19:27.077 "auth": { 00:19:27.077 "state": "completed", 00:19:27.077 "digest": "sha512", 00:19:27.077 "dhgroup": "null" 00:19:27.077 } 00:19:27.077 } 00:19:27.077 ]' 00:19:27.077 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:27.077 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:27.077 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:27.077 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:27.077 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:27.077 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.077 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.077 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.338 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn: 00:19:27.338 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn: 00:19:27.908 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.909 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:27.909 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.909 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.909 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.909 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:27.909 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:27.909 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:28.169 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:19:28.169 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:28.169 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:28.169 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:28.169 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:28.169 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.169 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:28.169 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.169 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.169 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.169 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:28.169 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:28.169 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:28.431 00:19:28.431 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:28.431 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.431 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:28.692 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.692 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.692 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.692 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.692 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.692 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.692 { 00:19:28.692 "cntlid": 103, 00:19:28.692 "qid": 0, 00:19:28.692 "state": "enabled", 00:19:28.692 "thread": "nvmf_tgt_poll_group_000", 00:19:28.692 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:28.692 "listen_address": { 00:19:28.692 "trtype": "TCP", 00:19:28.692 "adrfam": "IPv4", 00:19:28.692 "traddr": "10.0.0.2", 00:19:28.692 "trsvcid": "4420" 00:19:28.692 }, 00:19:28.692 "peer_address": { 00:19:28.692 "trtype": "TCP", 00:19:28.692 "adrfam": "IPv4", 00:19:28.692 "traddr": "10.0.0.1", 00:19:28.692 "trsvcid": "51984" 00:19:28.692 }, 00:19:28.692 "auth": { 00:19:28.692 "state": "completed", 00:19:28.692 "digest": "sha512", 00:19:28.692 "dhgroup": "null" 00:19:28.692 } 00:19:28.692 } 00:19:28.692 ]' 00:19:28.692 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:28.692 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:28.692 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:28.692 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:28.692 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:28.692 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.692 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.692 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.954 09:51:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:19:28.954 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:19:29.525 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.525 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:29.525 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.525 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.525 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.525 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:29.525 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.525 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:29.525 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:29.786 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:19:29.786 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:29.786 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:29.786 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:29.786 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:29.786 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.786 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.786 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.786 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.786 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.786 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
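
Condensed, every round traced above is the same host/target RPC sequence. One sha512/ffdhe2048 round reduces to the following sketch, using the socket paths, NQNs, and key names visible in this run; the target's default RPC socket and the earlier keyring registration of key0/ckey0 are assumptions, since they happen before this excerpt:

hostrpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
tgtrpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"   # target instance, default socket (assumption)
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# host side: accept only this digest/dhgroup combination for the round
$hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
# target side: allow the host NQN and bind it to the key pair (key0/ckey0 are
# keyring names registered earlier in auth.sh, not the secrets themselves)
$tgtrpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
# attach from the host; the DH-HMAC-CHAP exchange runs during this call
$hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
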
00:19:29.786 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.786 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.047 00:19:30.047 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:30.047 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:30.047 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.047 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.047 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.047 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.047 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.047 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.047 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:30.047 { 00:19:30.047 "cntlid": 105, 00:19:30.047 "qid": 0, 00:19:30.047 "state": "enabled", 00:19:30.047 "thread": "nvmf_tgt_poll_group_000", 00:19:30.047 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:30.047 "listen_address": { 00:19:30.047 "trtype": "TCP", 00:19:30.047 "adrfam": "IPv4", 00:19:30.047 "traddr": "10.0.0.2", 00:19:30.047 "trsvcid": "4420" 00:19:30.047 }, 00:19:30.047 "peer_address": { 00:19:30.047 "trtype": "TCP", 00:19:30.047 "adrfam": "IPv4", 00:19:30.047 "traddr": "10.0.0.1", 00:19:30.047 "trsvcid": "52588" 00:19:30.047 }, 00:19:30.047 "auth": { 00:19:30.047 "state": "completed", 00:19:30.047 "digest": "sha512", 00:19:30.047 "dhgroup": "ffdhe2048" 00:19:30.047 } 00:19:30.047 } 00:19:30.047 ]' 00:19:30.047 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:30.308 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:30.308 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:30.308 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:30.308 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:30.308 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.308 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.308 09:51:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.569 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=: 00:19:30.569 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=: 00:19:31.140 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.140 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:31.140 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.140 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.140 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.140 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:31.140 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:31.140 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:31.401 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:19:31.401 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:31.401 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:31.401 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:31.401 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:31.401 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.401 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.401 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.401 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:31.401 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.401 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.401 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.401 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.661 00:19:31.661 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:31.661 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:31.661 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.661 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.661 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.661 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.661 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.661 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.922 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.922 { 00:19:31.922 "cntlid": 107, 00:19:31.922 "qid": 0, 00:19:31.922 "state": "enabled", 00:19:31.922 "thread": "nvmf_tgt_poll_group_000", 00:19:31.922 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:31.922 "listen_address": { 00:19:31.922 "trtype": "TCP", 00:19:31.922 "adrfam": "IPv4", 00:19:31.922 "traddr": "10.0.0.2", 00:19:31.922 "trsvcid": "4420" 00:19:31.922 }, 00:19:31.922 "peer_address": { 00:19:31.922 "trtype": "TCP", 00:19:31.922 "adrfam": "IPv4", 00:19:31.922 "traddr": "10.0.0.1", 00:19:31.922 "trsvcid": "52608" 00:19:31.922 }, 00:19:31.922 "auth": { 00:19:31.922 "state": "completed", 00:19:31.922 "digest": "sha512", 00:19:31.922 "dhgroup": "ffdhe2048" 00:19:31.922 } 00:19:31.922 } 00:19:31.922 ]' 00:19:31.922 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.922 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:31.922 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:31.922 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:31.922 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:19:31.922 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.922 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.922 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.183 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==: 00:19:32.183 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==: 00:19:32.753 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.753 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:32.753 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.753 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.753 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.753 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:32.753 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:32.753 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:33.014 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:19:33.014 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:33.014 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:33.014 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:33.014 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:33.014 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.014 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
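
The qpair listings printed above are what each round asserts against. Stripped of the xtrace noise, the verification is three jq probes on the target's view of the admin queue pair plus a name check on the host side, followed by detach (a sketch reusing the variables from the previous snippet):

$hostrpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
qpairs=$($tgtrpc nvmf_subsystem_get_qpairs "$subnqn")
jq -r '.[0].auth.digest'  <<< "$qpairs"                 # expect: sha512
jq -r '.[0].auth.dhgroup' <<< "$qpairs"                 # expect: the dhgroup under test
jq -r '.[0].auth.state'   <<< "$qpairs"                 # expect: completed
$hostrpc bdev_nvme_detach_controller nvme0
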
00:19:33.014 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.014 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.014 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.014 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.014 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.014 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.275 00:19:33.275 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:33.275 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:33.275 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.275 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.275 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.275 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.275 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.275 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.275 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.275 { 00:19:33.275 "cntlid": 109, 00:19:33.275 "qid": 0, 00:19:33.275 "state": "enabled", 00:19:33.275 "thread": "nvmf_tgt_poll_group_000", 00:19:33.275 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:33.275 "listen_address": { 00:19:33.275 "trtype": "TCP", 00:19:33.275 "adrfam": "IPv4", 00:19:33.275 "traddr": "10.0.0.2", 00:19:33.275 "trsvcid": "4420" 00:19:33.275 }, 00:19:33.275 "peer_address": { 00:19:33.275 "trtype": "TCP", 00:19:33.275 "adrfam": "IPv4", 00:19:33.275 "traddr": "10.0.0.1", 00:19:33.275 "trsvcid": "52632" 00:19:33.275 }, 00:19:33.275 "auth": { 00:19:33.275 "state": "completed", 00:19:33.275 "digest": "sha512", 00:19:33.275 "dhgroup": "ffdhe2048" 00:19:33.275 } 00:19:33.275 } 00:19:33.275 ]' 00:19:33.536 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.536 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:33.536 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:33.536 09:51:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:33.536 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.536 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.536 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.536 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.798 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn: 00:19:33.798 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn: 00:19:34.370 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.370 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:34.370 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.370 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.370 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.371 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.371 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:34.371 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:34.632 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:19:34.632 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:34.632 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:34.632 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:34.632 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:34.632 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.632 09:51:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:34.632 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.632 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.632 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.632 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:34.632 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:34.632 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:34.893 00:19:34.893 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:34.893 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:34.893 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.893 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.893 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.893 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.893 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.893 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.893 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:34.893 { 00:19:34.893 "cntlid": 111, 00:19:34.893 "qid": 0, 00:19:34.893 "state": "enabled", 00:19:34.893 "thread": "nvmf_tgt_poll_group_000", 00:19:34.893 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:34.893 "listen_address": { 00:19:34.893 "trtype": "TCP", 00:19:34.893 "adrfam": "IPv4", 00:19:34.893 "traddr": "10.0.0.2", 00:19:34.893 "trsvcid": "4420" 00:19:34.893 }, 00:19:34.893 "peer_address": { 00:19:34.893 "trtype": "TCP", 00:19:34.893 "adrfam": "IPv4", 00:19:34.893 "traddr": "10.0.0.1", 00:19:34.893 "trsvcid": "52652" 00:19:34.893 }, 00:19:34.893 "auth": { 00:19:34.893 "state": "completed", 00:19:34.893 "digest": "sha512", 00:19:34.893 "dhgroup": "ffdhe2048" 00:19:34.893 } 00:19:34.893 } 00:19:34.893 ]' 00:19:34.893 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:35.155 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:35.155 
09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.155 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:35.155 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:35.155 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.155 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.155 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.442 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:19:35.442 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:19:36.095 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.095 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:36.095 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.095 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.095 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.095 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:36.095 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:36.095 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:36.095 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:36.095 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:19:36.095 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:36.095 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:36.095 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:36.095 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:36.095 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.095 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.095 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.095 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.095 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.095 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.095 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.095 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.367 00:19:36.367 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:36.367 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:36.367 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.629 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.629 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.629 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.629 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.629 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.629 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:36.629 { 00:19:36.629 "cntlid": 113, 00:19:36.629 "qid": 0, 00:19:36.629 "state": "enabled", 00:19:36.629 "thread": "nvmf_tgt_poll_group_000", 00:19:36.629 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:36.629 "listen_address": { 00:19:36.629 "trtype": "TCP", 00:19:36.629 "adrfam": "IPv4", 00:19:36.629 "traddr": "10.0.0.2", 00:19:36.629 "trsvcid": "4420" 00:19:36.629 }, 00:19:36.629 "peer_address": { 00:19:36.629 "trtype": "TCP", 00:19:36.629 "adrfam": "IPv4", 00:19:36.629 "traddr": "10.0.0.1", 00:19:36.629 "trsvcid": "52684" 00:19:36.629 }, 00:19:36.629 "auth": { 00:19:36.629 "state": "completed", 00:19:36.629 "digest": "sha512", 00:19:36.629 "dhgroup": "ffdhe3072" 00:19:36.629 } 00:19:36.629 } 00:19:36.629 ]' 00:19:36.629 09:51:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:36.629 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:36.629 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:36.629 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:36.629 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:36.890 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.890 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.890 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.890 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=: 00:19:36.890 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=: 00:19:37.459 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.720 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:37.720 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.720 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.720 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.720 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:37.720 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:37.720 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:37.720 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:19:37.720 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.720 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:19:37.720 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:37.720 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:37.720 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.720 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.720 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.720 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.720 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.720 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.720 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.720 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.981 00:19:37.981 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.981 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.981 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.242 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.242 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.242 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.242 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.242 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.242 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:38.242 { 00:19:38.242 "cntlid": 115, 00:19:38.242 "qid": 0, 00:19:38.242 "state": "enabled", 00:19:38.242 "thread": "nvmf_tgt_poll_group_000", 00:19:38.242 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:38.242 "listen_address": { 00:19:38.242 "trtype": "TCP", 00:19:38.242 "adrfam": "IPv4", 00:19:38.242 "traddr": "10.0.0.2", 00:19:38.242 "trsvcid": "4420" 00:19:38.242 }, 00:19:38.242 "peer_address": { 00:19:38.242 "trtype": "TCP", 00:19:38.242 "adrfam": "IPv4", 
00:19:38.242 "traddr": "10.0.0.1", 00:19:38.242 "trsvcid": "52712" 00:19:38.242 }, 00:19:38.242 "auth": { 00:19:38.242 "state": "completed", 00:19:38.242 "digest": "sha512", 00:19:38.242 "dhgroup": "ffdhe3072" 00:19:38.242 } 00:19:38.242 } 00:19:38.242 ]' 00:19:38.242 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:38.242 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:38.242 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:38.242 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:38.242 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:38.503 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.503 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.503 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.503 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==: 00:19:38.503 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==: 00:19:39.446 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.446 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:39.446 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.446 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.446 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.447 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.447 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:39.447 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:39.447 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:19:39.447 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.447 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:39.447 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:39.447 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:39.447 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.447 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.447 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.447 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.447 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.447 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.447 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.447 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.708 00:19:39.708 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:39.708 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:39.708 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.968 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.968 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.968 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.969 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.969 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.969 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:39.969 { 00:19:39.969 "cntlid": 117, 00:19:39.969 "qid": 0, 00:19:39.969 "state": "enabled", 00:19:39.969 "thread": "nvmf_tgt_poll_group_000", 00:19:39.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:39.969 "listen_address": { 00:19:39.969 "trtype": "TCP", 
00:19:39.969 "adrfam": "IPv4", 00:19:39.969 "traddr": "10.0.0.2", 00:19:39.969 "trsvcid": "4420" 00:19:39.969 }, 00:19:39.969 "peer_address": { 00:19:39.969 "trtype": "TCP", 00:19:39.969 "adrfam": "IPv4", 00:19:39.969 "traddr": "10.0.0.1", 00:19:39.969 "trsvcid": "34258" 00:19:39.969 }, 00:19:39.969 "auth": { 00:19:39.969 "state": "completed", 00:19:39.969 "digest": "sha512", 00:19:39.969 "dhgroup": "ffdhe3072" 00:19:39.969 } 00:19:39.969 } 00:19:39.969 ]' 00:19:39.969 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:39.969 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:39.969 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:39.969 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:39.969 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:39.969 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.969 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.969 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.229 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn: 00:19:40.229 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn: 00:19:40.800 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.800 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:40.800 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.800 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.800 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.800 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:40.800 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:40.800 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:41.061 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:19:41.061 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:41.061 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:41.061 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:41.061 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:41.061 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.061 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:41.061 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.061 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.061 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.061 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:41.061 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:41.061 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:41.322 00:19:41.322 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:41.322 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:41.322 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.582 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.582 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.582 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.582 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.582 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.582 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:41.582 { 00:19:41.582 "cntlid": 119, 00:19:41.582 "qid": 0, 00:19:41.582 "state": "enabled", 00:19:41.582 "thread": "nvmf_tgt_poll_group_000", 00:19:41.582 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:41.582 "listen_address": { 00:19:41.582 "trtype": "TCP", 00:19:41.582 "adrfam": "IPv4", 00:19:41.582 "traddr": "10.0.0.2", 00:19:41.582 "trsvcid": "4420" 00:19:41.582 }, 00:19:41.582 "peer_address": { 00:19:41.582 "trtype": "TCP", 00:19:41.582 "adrfam": "IPv4", 00:19:41.582 "traddr": "10.0.0.1", 00:19:41.582 "trsvcid": "34292" 00:19:41.582 }, 00:19:41.582 "auth": { 00:19:41.582 "state": "completed", 00:19:41.582 "digest": "sha512", 00:19:41.582 "dhgroup": "ffdhe3072" 00:19:41.582 } 00:19:41.582 } 00:19:41.582 ]' 00:19:41.582 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:41.582 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:41.582 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:41.582 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:41.582 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:41.582 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.582 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.582 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.844 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:19:41.844 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:19:42.413 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.413 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:42.413 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.413 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.413 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.413 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:42.413 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:42.413 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:42.413 09:51:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:42.673 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:19:42.673 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.673 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:42.673 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:42.673 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:42.673 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.673 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.673 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.673 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.673 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.673 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.673 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.673 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.933 00:19:42.933 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:42.933 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.933 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.193 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.193 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.193 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.193 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.193 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.193 09:51:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:43.193 { 00:19:43.193 "cntlid": 121, 00:19:43.193 "qid": 0, 00:19:43.193 "state": "enabled", 00:19:43.193 "thread": "nvmf_tgt_poll_group_000", 00:19:43.193 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:43.193 "listen_address": { 00:19:43.193 "trtype": "TCP", 00:19:43.193 "adrfam": "IPv4", 00:19:43.193 "traddr": "10.0.0.2", 00:19:43.193 "trsvcid": "4420" 00:19:43.193 }, 00:19:43.193 "peer_address": { 00:19:43.193 "trtype": "TCP", 00:19:43.193 "adrfam": "IPv4", 00:19:43.193 "traddr": "10.0.0.1", 00:19:43.193 "trsvcid": "34324" 00:19:43.193 }, 00:19:43.193 "auth": { 00:19:43.193 "state": "completed", 00:19:43.193 "digest": "sha512", 00:19:43.193 "dhgroup": "ffdhe4096" 00:19:43.193 } 00:19:43.193 } 00:19:43.193 ]' 00:19:43.193 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:43.193 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:43.193 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:43.193 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:43.194 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:43.194 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.194 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.194 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.454 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=: 00:19:43.454 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=: 00:19:44.024 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.024 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:44.024 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.024 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.285 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
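For reference: each pass of the loop above exercises one digest/dhgroup/key combination through the same RPC sequence. Below is a condensed, hand-written sketch of that sequence — not the literal target/auth.sh code — assuming the SPDK target is already serving nqn.2024-03.io.spdk:cnode0 on 10.0.0.2:4420, the host-side SPDK app answers RPCs on /var/tmp/host.sock, and keys named key1/ckey1 were registered with both sides earlier in the test. Every RPC name and flag used here appears verbatim in the trace.

#!/usr/bin/env bash
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# 1. Pin the host-side initiator to a single digest/dhgroup combination.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# 2. Allow the host on the target subsystem; key1 authenticates the host,
#    ckey1 (--dhchap-ctrlr-key) enables bidirectional controller authentication.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 3. Attach a controller from the host side; the DH-HMAC-CHAP exchange
#    runs during this connect and fails the attach if authentication fails.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 4. Verify on the target side: the qpair's auth state should be "completed"
#    with the negotiated digest and dhgroup.
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'

# 5. Tear down so the next digest/dhgroup/key combination starts clean.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Each pass in the trace also repeats the check with the kernel initiator (nvme connect ... --dhchap-secret DHHC-1:... followed by nvme disconnect), so every key is exercised from both the SPDK host stack and the kernel host stack before the host entry is removed.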
00:19:44.285 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.285 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:44.285 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:44.285 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:19:44.285 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.285 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:44.285 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:44.285 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:44.285 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.285 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.285 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.285 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.285 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.285 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.285 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.285 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.546 00:19:44.546 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:44.546 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:44.546 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.807 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.807 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.807 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.807 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.807 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.807 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.807 { 00:19:44.807 "cntlid": 123, 00:19:44.807 "qid": 0, 00:19:44.807 "state": "enabled", 00:19:44.807 "thread": "nvmf_tgt_poll_group_000", 00:19:44.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:44.807 "listen_address": { 00:19:44.807 "trtype": "TCP", 00:19:44.807 "adrfam": "IPv4", 00:19:44.807 "traddr": "10.0.0.2", 00:19:44.807 "trsvcid": "4420" 00:19:44.807 }, 00:19:44.807 "peer_address": { 00:19:44.807 "trtype": "TCP", 00:19:44.807 "adrfam": "IPv4", 00:19:44.807 "traddr": "10.0.0.1", 00:19:44.807 "trsvcid": "34354" 00:19:44.807 }, 00:19:44.807 "auth": { 00:19:44.807 "state": "completed", 00:19:44.807 "digest": "sha512", 00:19:44.807 "dhgroup": "ffdhe4096" 00:19:44.807 } 00:19:44.807 } 00:19:44.807 ]' 00:19:44.807 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.807 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:44.807 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.807 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:44.807 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:44.807 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.807 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.807 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.067 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==: 00:19:45.067 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==: 00:19:45.638 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.899 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:45.899 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.899 09:52:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.899 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.899 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:45.899 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:45.900 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:45.900 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:19:45.900 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:45.900 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:45.900 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:45.900 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:45.900 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.900 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.900 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.900 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.900 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.900 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.900 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.900 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.161 00:19:46.161 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.161 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.161 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.422 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.422 09:52:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.422 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.422 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.422 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.422 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:46.422 { 00:19:46.422 "cntlid": 125, 00:19:46.422 "qid": 0, 00:19:46.422 "state": "enabled", 00:19:46.422 "thread": "nvmf_tgt_poll_group_000", 00:19:46.422 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:46.422 "listen_address": { 00:19:46.422 "trtype": "TCP", 00:19:46.422 "adrfam": "IPv4", 00:19:46.422 "traddr": "10.0.0.2", 00:19:46.422 "trsvcid": "4420" 00:19:46.422 }, 00:19:46.422 "peer_address": { 00:19:46.422 "trtype": "TCP", 00:19:46.422 "adrfam": "IPv4", 00:19:46.422 "traddr": "10.0.0.1", 00:19:46.422 "trsvcid": "34376" 00:19:46.422 }, 00:19:46.422 "auth": { 00:19:46.422 "state": "completed", 00:19:46.422 "digest": "sha512", 00:19:46.422 "dhgroup": "ffdhe4096" 00:19:46.422 } 00:19:46.422 } 00:19:46.422 ]' 00:19:46.422 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:46.422 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:46.422 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:46.422 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:46.423 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:46.685 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.685 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.685 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.685 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn: 00:19:46.685 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn: 00:19:47.631 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.631 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:47.631 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.631 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.631 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.631 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:47.631 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:47.631 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:47.631 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:19:47.631 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:47.631 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:47.631 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:47.631 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:47.631 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.631 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:47.631 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.631 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.631 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.631 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:47.631 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:47.631 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:47.892 00:19:47.892 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:47.892 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:47.892 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.152 09:52:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.152 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.152 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.152 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.152 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.152 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:48.152 { 00:19:48.152 "cntlid": 127, 00:19:48.152 "qid": 0, 00:19:48.152 "state": "enabled", 00:19:48.152 "thread": "nvmf_tgt_poll_group_000", 00:19:48.152 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:48.152 "listen_address": { 00:19:48.152 "trtype": "TCP", 00:19:48.152 "adrfam": "IPv4", 00:19:48.152 "traddr": "10.0.0.2", 00:19:48.152 "trsvcid": "4420" 00:19:48.152 }, 00:19:48.152 "peer_address": { 00:19:48.152 "trtype": "TCP", 00:19:48.152 "adrfam": "IPv4", 00:19:48.152 "traddr": "10.0.0.1", 00:19:48.152 "trsvcid": "34416" 00:19:48.152 }, 00:19:48.152 "auth": { 00:19:48.152 "state": "completed", 00:19:48.152 "digest": "sha512", 00:19:48.152 "dhgroup": "ffdhe4096" 00:19:48.152 } 00:19:48.152 } 00:19:48.152 ]' 00:19:48.152 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:48.152 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:48.152 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:48.152 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:48.152 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:48.152 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.152 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.153 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.413 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:19:48.413 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:19:48.985 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.985 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:48.985 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.985 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.985 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.985 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:48.985 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.985 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:48.985 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:49.247 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:19:49.247 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:49.247 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:49.247 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:49.247 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:49.247 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.247 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.247 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.247 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.247 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.247 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.247 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.247 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.509 00:19:49.509 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.509 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:49.509 
09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.771 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.771 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.771 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.771 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.771 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.771 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:49.771 { 00:19:49.771 "cntlid": 129, 00:19:49.771 "qid": 0, 00:19:49.771 "state": "enabled", 00:19:49.771 "thread": "nvmf_tgt_poll_group_000", 00:19:49.771 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:49.771 "listen_address": { 00:19:49.771 "trtype": "TCP", 00:19:49.771 "adrfam": "IPv4", 00:19:49.771 "traddr": "10.0.0.2", 00:19:49.771 "trsvcid": "4420" 00:19:49.771 }, 00:19:49.771 "peer_address": { 00:19:49.771 "trtype": "TCP", 00:19:49.771 "adrfam": "IPv4", 00:19:49.771 "traddr": "10.0.0.1", 00:19:49.771 "trsvcid": "50836" 00:19:49.771 }, 00:19:49.771 "auth": { 00:19:49.771 "state": "completed", 00:19:49.771 "digest": "sha512", 00:19:49.771 "dhgroup": "ffdhe6144" 00:19:49.771 } 00:19:49.771 } 00:19:49.771 ]' 00:19:49.771 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:49.771 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:49.771 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.771 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:49.771 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.771 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.771 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.771 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.033 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=: 00:19:50.033 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret 
DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=: 00:19:50.603 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.863 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:50.863 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.863 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.863 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.863 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.863 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:50.863 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:51.124 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:19:51.124 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.124 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:51.124 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:51.124 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:51.124 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.124 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.124 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.124 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.124 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.124 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.124 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.124 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.385 00:19:51.385 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.385 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.385 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.646 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.646 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.646 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.646 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.646 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.646 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.646 { 00:19:51.646 "cntlid": 131, 00:19:51.646 "qid": 0, 00:19:51.646 "state": "enabled", 00:19:51.646 "thread": "nvmf_tgt_poll_group_000", 00:19:51.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:51.646 "listen_address": { 00:19:51.646 "trtype": "TCP", 00:19:51.646 "adrfam": "IPv4", 00:19:51.646 "traddr": "10.0.0.2", 00:19:51.646 "trsvcid": "4420" 00:19:51.646 }, 00:19:51.646 "peer_address": { 00:19:51.646 "trtype": "TCP", 00:19:51.646 "adrfam": "IPv4", 00:19:51.646 "traddr": "10.0.0.1", 00:19:51.646 "trsvcid": "50858" 00:19:51.646 }, 00:19:51.646 "auth": { 00:19:51.646 "state": "completed", 00:19:51.646 "digest": "sha512", 00:19:51.646 "dhgroup": "ffdhe6144" 00:19:51.646 } 00:19:51.646 } 00:19:51.646 ]' 00:19:51.646 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.646 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:51.646 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.646 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:51.646 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.646 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.646 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.646 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.908 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==: 00:19:51.908 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==: 00:19:52.479 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.479 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:52.479 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.479 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.479 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.479 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.479 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:52.479 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:52.740 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:19:52.740 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.740 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:52.740 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:52.740 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:52.740 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.740 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.740 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.740 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.740 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.740 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.740 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.740 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.002 00:19:53.002 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.002 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.002 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.263 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.263 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.263 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.263 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.263 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.263 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.263 { 00:19:53.263 "cntlid": 133, 00:19:53.263 "qid": 0, 00:19:53.263 "state": "enabled", 00:19:53.263 "thread": "nvmf_tgt_poll_group_000", 00:19:53.263 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:53.263 "listen_address": { 00:19:53.263 "trtype": "TCP", 00:19:53.263 "adrfam": "IPv4", 00:19:53.263 "traddr": "10.0.0.2", 00:19:53.263 "trsvcid": "4420" 00:19:53.263 }, 00:19:53.263 "peer_address": { 00:19:53.263 "trtype": "TCP", 00:19:53.263 "adrfam": "IPv4", 00:19:53.263 "traddr": "10.0.0.1", 00:19:53.263 "trsvcid": "50886" 00:19:53.263 }, 00:19:53.263 "auth": { 00:19:53.263 "state": "completed", 00:19:53.263 "digest": "sha512", 00:19:53.263 "dhgroup": "ffdhe6144" 00:19:53.263 } 00:19:53.263 } 00:19:53.263 ]' 00:19:53.263 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.263 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:53.263 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.263 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:53.263 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.523 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.523 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.523 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.523 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret 
DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn: 00:19:53.523 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn: 00:19:54.464 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.464 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:54.464 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.464 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.464 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.464 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.464 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:54.464 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:54.464 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:19:54.464 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.464 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:54.464 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:54.464 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:54.464 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.464 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:54.464 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.464 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.464 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.464 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:54.464 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:19:54.464 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:54.725 00:19:54.725 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.725 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.725 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.986 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.986 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.986 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.986 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.986 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.986 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.986 { 00:19:54.986 "cntlid": 135, 00:19:54.986 "qid": 0, 00:19:54.986 "state": "enabled", 00:19:54.986 "thread": "nvmf_tgt_poll_group_000", 00:19:54.986 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:54.986 "listen_address": { 00:19:54.986 "trtype": "TCP", 00:19:54.986 "adrfam": "IPv4", 00:19:54.986 "traddr": "10.0.0.2", 00:19:54.986 "trsvcid": "4420" 00:19:54.986 }, 00:19:54.986 "peer_address": { 00:19:54.986 "trtype": "TCP", 00:19:54.986 "adrfam": "IPv4", 00:19:54.986 "traddr": "10.0.0.1", 00:19:54.986 "trsvcid": "50920" 00:19:54.986 }, 00:19:54.986 "auth": { 00:19:54.986 "state": "completed", 00:19:54.986 "digest": "sha512", 00:19:54.986 "dhgroup": "ffdhe6144" 00:19:54.986 } 00:19:54.986 } 00:19:54.986 ]' 00:19:54.986 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.986 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:54.986 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.986 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:54.986 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:55.247 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.247 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.247 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.247 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:19:55.247 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:19:55.818 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.818 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:55.818 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.818 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.079 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.079 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:56.079 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:56.079 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:56.079 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:56.079 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:19:56.079 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.079 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:56.079 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:56.079 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:56.079 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.079 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.079 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.079 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.079 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.079 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.079 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.079 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.650 00:19:56.650 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.650 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.650 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.911 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.911 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.911 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.911 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.911 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.911 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.911 { 00:19:56.911 "cntlid": 137, 00:19:56.911 "qid": 0, 00:19:56.911 "state": "enabled", 00:19:56.911 "thread": "nvmf_tgt_poll_group_000", 00:19:56.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:56.911 "listen_address": { 00:19:56.911 "trtype": "TCP", 00:19:56.911 "adrfam": "IPv4", 00:19:56.911 "traddr": "10.0.0.2", 00:19:56.911 "trsvcid": "4420" 00:19:56.911 }, 00:19:56.912 "peer_address": { 00:19:56.912 "trtype": "TCP", 00:19:56.912 "adrfam": "IPv4", 00:19:56.912 "traddr": "10.0.0.1", 00:19:56.912 "trsvcid": "50936" 00:19:56.912 }, 00:19:56.912 "auth": { 00:19:56.912 "state": "completed", 00:19:56.912 "digest": "sha512", 00:19:56.912 "dhgroup": "ffdhe8192" 00:19:56.912 } 00:19:56.912 } 00:19:56.912 ]' 00:19:56.912 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.912 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:56.912 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.912 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:56.912 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.912 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.912 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.912 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.172 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=: 00:19:57.172 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=: 00:19:57.742 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.742 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:57.742 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.742 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.742 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.742 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.742 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:57.743 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:58.003 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:19:58.003 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.003 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:58.003 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:58.003 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:58.003 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.003 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.003 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.003 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.003 09:52:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.003 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.003 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.003 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.575 00:19:58.575 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.575 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.575 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.575 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.575 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.575 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.575 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.575 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.575 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.575 { 00:19:58.575 "cntlid": 139, 00:19:58.575 "qid": 0, 00:19:58.575 "state": "enabled", 00:19:58.575 "thread": "nvmf_tgt_poll_group_000", 00:19:58.575 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:58.575 "listen_address": { 00:19:58.575 "trtype": "TCP", 00:19:58.575 "adrfam": "IPv4", 00:19:58.575 "traddr": "10.0.0.2", 00:19:58.575 "trsvcid": "4420" 00:19:58.575 }, 00:19:58.575 "peer_address": { 00:19:58.575 "trtype": "TCP", 00:19:58.575 "adrfam": "IPv4", 00:19:58.575 "traddr": "10.0.0.1", 00:19:58.575 "trsvcid": "50958" 00:19:58.575 }, 00:19:58.575 "auth": { 00:19:58.575 "state": "completed", 00:19:58.575 "digest": "sha512", 00:19:58.575 "dhgroup": "ffdhe8192" 00:19:58.575 } 00:19:58.575 } 00:19:58.575 ]' 00:19:58.575 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.575 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:58.575 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.837 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:58.837 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.837 09:52:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.837 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.837 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.097 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==: 00:19:59.097 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: --dhchap-ctrl-secret DHHC-1:02:N2ZiZTRhNzZmOWJhZDJiZWJlZTc5NGQ0NGZlN2Y5MGE3MDU4YWVhMDI5ZmE3YWVijWDYnA==: 00:19:59.667 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.667 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:59.667 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.667 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.667 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.667 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.667 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:59.667 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:59.929 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:19:59.929 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:59.929 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:59.929 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:59.929 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:59.929 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.929 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.929 09:52:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.929 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.929 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.929 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.929 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.929 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.189 00:20:00.190 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.190 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.190 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.450 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.450 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.450 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.450 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.450 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.450 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.450 { 00:20:00.450 "cntlid": 141, 00:20:00.450 "qid": 0, 00:20:00.450 "state": "enabled", 00:20:00.450 "thread": "nvmf_tgt_poll_group_000", 00:20:00.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:00.450 "listen_address": { 00:20:00.450 "trtype": "TCP", 00:20:00.450 "adrfam": "IPv4", 00:20:00.450 "traddr": "10.0.0.2", 00:20:00.450 "trsvcid": "4420" 00:20:00.450 }, 00:20:00.450 "peer_address": { 00:20:00.450 "trtype": "TCP", 00:20:00.450 "adrfam": "IPv4", 00:20:00.450 "traddr": "10.0.0.1", 00:20:00.450 "trsvcid": "39188" 00:20:00.450 }, 00:20:00.450 "auth": { 00:20:00.450 "state": "completed", 00:20:00.450 "digest": "sha512", 00:20:00.450 "dhgroup": "ffdhe8192" 00:20:00.450 } 00:20:00.450 } 00:20:00.450 ]' 00:20:00.450 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.450 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:00.450 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.710 09:52:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:00.710 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.710 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.710 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.710 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.710 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn: 00:20:00.710 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:01:NjkwMDhiMWQ0NjhmMmI4N2RkMzE3NjRkMmI2ZDkzNGPNazzn: 00:20:01.652 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.652 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:01.652 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.652 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.652 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.652 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.652 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:01.652 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:01.652 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:20:01.652 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:01.652 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:01.652 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:01.652 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:01.652 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.652 09:52:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:01.652 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.652 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.652 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.652 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:01.652 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:01.652 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:02.223 00:20:02.223 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.223 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.223 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.223 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.223 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.223 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.223 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.223 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.223 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.223 { 00:20:02.223 "cntlid": 143, 00:20:02.223 "qid": 0, 00:20:02.223 "state": "enabled", 00:20:02.223 "thread": "nvmf_tgt_poll_group_000", 00:20:02.223 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:02.223 "listen_address": { 00:20:02.223 "trtype": "TCP", 00:20:02.223 "adrfam": "IPv4", 00:20:02.223 "traddr": "10.0.0.2", 00:20:02.223 "trsvcid": "4420" 00:20:02.223 }, 00:20:02.223 "peer_address": { 00:20:02.223 "trtype": "TCP", 00:20:02.223 "adrfam": "IPv4", 00:20:02.223 "traddr": "10.0.0.1", 00:20:02.223 "trsvcid": "39228" 00:20:02.223 }, 00:20:02.223 "auth": { 00:20:02.223 "state": "completed", 00:20:02.223 "digest": "sha512", 00:20:02.223 "dhgroup": "ffdhe8192" 00:20:02.223 } 00:20:02.223 } 00:20:02.223 ]' 00:20:02.223 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.483 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:02.483 
09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.483 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:02.483 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.483 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.483 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.484 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.743 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:20:02.743 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:20:03.314 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.314 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:03.314 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.314 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.314 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.314 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:20:03.314 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:20:03.314 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:20:03.314 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:03.314 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:03.314 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:03.576 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:20:03.576 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.576 09:52:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:03.576 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:03.576 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:03.576 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.576 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.576 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.576 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.576 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.576 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.576 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.576 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.146 00:20:04.146 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.146 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.146 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.146 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.146 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.146 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.146 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.146 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.146 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.146 { 00:20:04.146 "cntlid": 145, 00:20:04.146 "qid": 0, 00:20:04.146 "state": "enabled", 00:20:04.146 "thread": "nvmf_tgt_poll_group_000", 00:20:04.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:04.146 "listen_address": { 00:20:04.146 "trtype": "TCP", 00:20:04.146 "adrfam": "IPv4", 00:20:04.146 "traddr": "10.0.0.2", 00:20:04.146 "trsvcid": "4420" 00:20:04.146 }, 00:20:04.146 "peer_address": { 00:20:04.146 
"trtype": "TCP", 00:20:04.147 "adrfam": "IPv4", 00:20:04.147 "traddr": "10.0.0.1", 00:20:04.147 "trsvcid": "39264" 00:20:04.147 }, 00:20:04.147 "auth": { 00:20:04.147 "state": "completed", 00:20:04.147 "digest": "sha512", 00:20:04.147 "dhgroup": "ffdhe8192" 00:20:04.147 } 00:20:04.147 } 00:20:04.147 ]' 00:20:04.147 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.147 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:04.147 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.408 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:04.408 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.408 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.408 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.408 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.408 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=: 00:20:04.408 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:Y2EyYjQ1YjMxOTNiZjFiZTMyOTRjY2EzMGJlMjZmNTEyZDgyM2YyNGVhZGVhNTAz4+vL3Q==: --dhchap-ctrl-secret DHHC-1:03:N2ExMTZlZGIzZWZkYzM2YzJmYjFiYmUyZWI3NmY5ODQ5ZDJkMzkwZTQ0YTU3NTJiNDdiYjRlZTAwZDU5NTU5N6d8CyU=: 00:20:05.353 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.353 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:05.353 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.353 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.353 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.353 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:20:05.353 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.353 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.353 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.353 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:20:05.353 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:05.353 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:20:05.353 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:05.353 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:05.353 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:05.353 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:05.353 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:20:05.353 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:05.353 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:05.615 request: 00:20:05.615 { 00:20:05.615 "name": "nvme0", 00:20:05.615 "trtype": "tcp", 00:20:05.615 "traddr": "10.0.0.2", 00:20:05.615 "adrfam": "ipv4", 00:20:05.615 "trsvcid": "4420", 00:20:05.615 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:05.615 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:05.615 "prchk_reftag": false, 00:20:05.615 "prchk_guard": false, 00:20:05.615 "hdgst": false, 00:20:05.615 "ddgst": false, 00:20:05.615 "dhchap_key": "key2", 00:20:05.615 "allow_unrecognized_csi": false, 00:20:05.615 "method": "bdev_nvme_attach_controller", 00:20:05.615 "req_id": 1 00:20:05.615 } 00:20:05.615 Got JSON-RPC error response 00:20:05.615 response: 00:20:05.615 { 00:20:05.615 "code": -5, 00:20:05.615 "message": "Input/output error" 00:20:05.615 } 00:20:05.615 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:05.615 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:05.615 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:05.615 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:05.615 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:05.615 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.615 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.615 09:52:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.615 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.615 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.615 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.615 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.615 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:05.615 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:05.615 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:05.615 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:05.615 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:05.615 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:05.615 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:05.615 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:05.615 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:05.615 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:06.185 request: 00:20:06.185 { 00:20:06.185 "name": "nvme0", 00:20:06.185 "trtype": "tcp", 00:20:06.185 "traddr": "10.0.0.2", 00:20:06.185 "adrfam": "ipv4", 00:20:06.185 "trsvcid": "4420", 00:20:06.185 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:06.185 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:06.185 "prchk_reftag": false, 00:20:06.185 "prchk_guard": false, 00:20:06.185 "hdgst": false, 00:20:06.185 "ddgst": false, 00:20:06.185 "dhchap_key": "key1", 00:20:06.185 "dhchap_ctrlr_key": "ckey2", 00:20:06.185 "allow_unrecognized_csi": false, 00:20:06.185 "method": "bdev_nvme_attach_controller", 00:20:06.185 "req_id": 1 00:20:06.185 } 00:20:06.185 Got JSON-RPC error response 00:20:06.185 response: 00:20:06.185 { 00:20:06.185 "code": -5, 00:20:06.185 "message": "Input/output error" 00:20:06.185 } 00:20:06.185 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:06.185 09:52:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:06.185 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:06.185 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:06.185 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:06.185 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.185 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.185 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.185 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:20:06.185 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.186 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.186 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.186 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.186 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:06.186 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.186 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:06.186 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:06.186 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:06.186 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:06.186 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.186 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.186 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.446 request: 00:20:06.446 { 00:20:06.446 "name": "nvme0", 00:20:06.446 "trtype": "tcp", 00:20:06.446 "traddr": "10.0.0.2", 00:20:06.446 "adrfam": "ipv4", 00:20:06.446 "trsvcid": "4420", 00:20:06.446 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:06.446 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:06.446 "prchk_reftag": false, 00:20:06.446 "prchk_guard": false, 00:20:06.446 "hdgst": false, 00:20:06.446 "ddgst": false, 00:20:06.446 "dhchap_key": "key1", 00:20:06.446 "dhchap_ctrlr_key": "ckey1", 00:20:06.446 "allow_unrecognized_csi": false, 00:20:06.446 "method": "bdev_nvme_attach_controller", 00:20:06.446 "req_id": 1 00:20:06.446 } 00:20:06.446 Got JSON-RPC error response 00:20:06.446 response: 00:20:06.446 { 00:20:06.446 "code": -5, 00:20:06.446 "message": "Input/output error" 00:20:06.446 } 00:20:06.446 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:06.446 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:06.446 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:06.446 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:06.446 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:06.446 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.446 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.446 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.446 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3852584 00:20:06.446 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3852584 ']' 00:20:06.446 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3852584 00:20:06.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:06.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:06.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3852584 00:20:06.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:06.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:06.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3852584' 00:20:06.705 killing process with pid 3852584 00:20:06.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3852584 00:20:06.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3852584 00:20:06.705 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:06.705 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:06.705 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:06.705 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:06.705 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3878906 00:20:06.705 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3878906 00:20:06.705 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:06.705 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3878906 ']' 00:20:06.705 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.705 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:06.705 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.705 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:06.705 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.641 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:07.641 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:07.641 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:07.641 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:07.641 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.641 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.641 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:07.641 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3878906 00:20:07.641 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3878906 ']' 00:20:07.641 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.641 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:07.641 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
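At this point the trace shows the suite tearing down the first nvmf target (pid 3852584) and starting a replacement with --wait-for-rpc and the nvmf_auth debug log component, so the keyring-based DHCHAP cases that follow run against a freshly initialized target. A minimal sketch of that restart pattern, using the paths and flags visible in the trace; the polling loop is an illustrative stand-in for the suite's waitforlisten helper, not its actual implementation:

    # Stop the previous target (pid 3852584 in the trace above) and wait for it to exit.
    kill 3852584
    while kill -0 3852584 2>/dev/null; do sleep 0.1; done

    # Start a fresh target inside the test netns; the flags mirror the trace:
    # -i 0 sets the shm instance id, -e 0xFFFF the tracepoint mask,
    # --wait-for-rpc defers subsystem init until framework_start_init is
    # invoked over RPC, and -L nvmf_auth enables the auth debug log.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &

    # Stand-in for waitforlisten: poll the RPC socket until the new process answers.
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done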
00:20:07.641 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:07.641 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.900 null0 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.rzE 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.ewz ]] 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ewz 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.8Xl 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.iwW ]] 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iwW 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:07.900 09:52:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.g31 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.QU9 ]] 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.QU9 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.bxt 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:20:07.900 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.901 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:07.901 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:07.901 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:07.901 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.901 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:07.901 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.901 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.159 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.159 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:08.159 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
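Each generated secret is loaded into the restarted target through keyring_file_add_key (key0 through key3, plus controller keys ckey0 through ckey2), and the connect_authenticate round above pins this host to key3 with sha512/ffdhe8192; the rpc.py expansion of that attach continues below. Stripped of xtrace noise, the target-side and host-side calls have this shape ($hostnqn abbreviates the UUID-based host NQN):

    # Target: register a DH-HMAC-CHAP secret from a file, then require it
    # for this host on the subsystem.
    ./scripts/rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.bxt
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        "$hostnqn" --dhchap-key key3

    # Host: attach a controller that authenticates with the same key.
    ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3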
00:20:08.159 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:08.727 nvme0n1 00:20:08.727 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:08.727 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:08.727 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.987 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.987 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.987 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.987 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.987 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.987 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.987 { 00:20:08.987 "cntlid": 1, 00:20:08.987 "qid": 0, 00:20:08.987 "state": "enabled", 00:20:08.987 "thread": "nvmf_tgt_poll_group_000", 00:20:08.987 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:08.987 "listen_address": { 00:20:08.987 "trtype": "TCP", 00:20:08.987 "adrfam": "IPv4", 00:20:08.987 "traddr": "10.0.0.2", 00:20:08.987 "trsvcid": "4420" 00:20:08.987 }, 00:20:08.987 "peer_address": { 00:20:08.987 "trtype": "TCP", 00:20:08.987 "adrfam": "IPv4", 00:20:08.987 "traddr": "10.0.0.1", 00:20:08.987 "trsvcid": "39338" 00:20:08.987 }, 00:20:08.987 "auth": { 00:20:08.987 "state": "completed", 00:20:08.987 "digest": "sha512", 00:20:08.987 "dhgroup": "ffdhe8192" 00:20:08.987 } 00:20:08.987 } 00:20:08.987 ]' 00:20:08.987 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.987 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:08.987 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:08.987 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:08.987 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.987 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.987 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.987 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.247 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:20:09.247 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:20:09.817 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.078 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:10.078 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.078 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.078 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.078 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:10.078 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.078 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.078 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.078 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:10.078 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:10.078 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:20:10.078 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:10.078 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:20:10.078 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:10.078 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:10.079 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:10.079 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:10.079 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:10.079 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:10.079 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:10.340 request: 00:20:10.340 { 00:20:10.340 "name": "nvme0", 00:20:10.340 "trtype": "tcp", 00:20:10.340 "traddr": "10.0.0.2", 00:20:10.340 "adrfam": "ipv4", 00:20:10.340 "trsvcid": "4420", 00:20:10.340 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:10.340 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:10.340 "prchk_reftag": false, 00:20:10.340 "prchk_guard": false, 00:20:10.340 "hdgst": false, 00:20:10.340 "ddgst": false, 00:20:10.340 "dhchap_key": "key3", 00:20:10.340 "allow_unrecognized_csi": false, 00:20:10.340 "method": "bdev_nvme_attach_controller", 00:20:10.340 "req_id": 1 00:20:10.340 } 00:20:10.340 Got JSON-RPC error response 00:20:10.340 response: 00:20:10.340 { 00:20:10.340 "code": -5, 00:20:10.340 "message": "Input/output error" 00:20:10.340 } 00:20:10.340 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:10.340 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:10.340 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:10.340 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:10.340 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:20:10.340 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:20:10.340 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:10.340 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:10.601 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:20:10.601 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:10.601 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:20:10.601 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:10.601 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:10.601 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:10.601 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:10.601 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:10.601 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:10.601 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:10.601 request: 00:20:10.601 { 00:20:10.601 "name": "nvme0", 00:20:10.602 "trtype": "tcp", 00:20:10.602 "traddr": "10.0.0.2", 00:20:10.602 "adrfam": "ipv4", 00:20:10.602 "trsvcid": "4420", 00:20:10.602 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:10.602 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:10.602 "prchk_reftag": false, 00:20:10.602 "prchk_guard": false, 00:20:10.602 "hdgst": false, 00:20:10.602 "ddgst": false, 00:20:10.602 "dhchap_key": "key3", 00:20:10.602 "allow_unrecognized_csi": false, 00:20:10.602 "method": "bdev_nvme_attach_controller", 00:20:10.602 "req_id": 1 00:20:10.602 } 00:20:10.602 Got JSON-RPC error response 00:20:10.602 response: 00:20:10.602 { 00:20:10.602 "code": -5, 00:20:10.602 "message": "Input/output error" 00:20:10.602 } 00:20:10.602 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:10.602 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:10.602 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:10.602 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:10.602 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:20:10.602 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:20:10.602 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:20:10.602 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:10.602 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:10.602 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:10.863 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:10.863 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.863 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.863 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.863 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:10.863 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.863 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.863 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.863 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:10.863 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:10.863 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:10.863 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:10.863 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:10.863 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:10.863 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:10.863 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:10.863 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:10.863 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:11.124 request: 00:20:11.124 { 00:20:11.124 "name": "nvme0", 00:20:11.124 "trtype": "tcp", 00:20:11.124 "traddr": "10.0.0.2", 00:20:11.124 "adrfam": "ipv4", 00:20:11.124 "trsvcid": "4420", 00:20:11.124 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:11.124 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:11.124 "prchk_reftag": false, 00:20:11.124 "prchk_guard": false, 00:20:11.124 "hdgst": false, 00:20:11.124 "ddgst": false, 00:20:11.124 "dhchap_key": "key0", 00:20:11.124 "dhchap_ctrlr_key": "key1", 00:20:11.124 "allow_unrecognized_csi": false, 00:20:11.124 "method": "bdev_nvme_attach_controller", 00:20:11.124 "req_id": 1 00:20:11.124 } 00:20:11.124 Got JSON-RPC error response 00:20:11.124 response: 00:20:11.124 { 00:20:11.124 "code": -5, 00:20:11.124 "message": "Input/output error" 00:20:11.124 } 00:20:11.124 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:11.124 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:11.124 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:11.124 09:52:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:11.124 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:20:11.124 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:20:11.124 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:20:11.385 nvme0n1 00:20:11.385 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:20:11.385 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:20:11.385 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.646 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.646 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.646 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.907 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:20:11.907 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.907 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.907 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.907 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:20:11.907 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:11.907 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:12.478 nvme0n1 00:20:12.478 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:20:12.478 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:20:12.478 09:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.744 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.744 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:12.744 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.744 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.744 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.744 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:20:12.744 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:20:12.744 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.032 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.032 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:20:13.032 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: --dhchap-ctrl-secret DHHC-1:03:MjViMzQ3NjQxMTM3NDM1MjgzODMxODFmZTNmMmQzZGMwODAyYzdkMjY3ZWQ0ZmQ0NjhlZGYxZDY0YWM0Nzc5OPDgcH4=: 00:20:13.627 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:20:13.627 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:20:13.627 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:20:13.627 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:20:13.627 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:20:13.627 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:20:13.627 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:20:13.627 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.627 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.896 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:20:13.896 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:13.896 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:20:13.896 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:13.896 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:13.896 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:13.896 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:13.896 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:20:13.897 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:13.897 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:14.160 request: 00:20:14.160 { 00:20:14.160 "name": "nvme0", 00:20:14.160 "trtype": "tcp", 00:20:14.160 "traddr": "10.0.0.2", 00:20:14.160 "adrfam": "ipv4", 00:20:14.160 "trsvcid": "4420", 00:20:14.160 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:14.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:14.160 "prchk_reftag": false, 00:20:14.160 "prchk_guard": false, 00:20:14.160 "hdgst": false, 00:20:14.160 "ddgst": false, 00:20:14.160 "dhchap_key": "key1", 00:20:14.160 "allow_unrecognized_csi": false, 00:20:14.160 "method": "bdev_nvme_attach_controller", 00:20:14.160 "req_id": 1 00:20:14.160 } 00:20:14.160 Got JSON-RPC error response 00:20:14.160 response: 00:20:14.160 { 00:20:14.160 "code": -5, 00:20:14.160 "message": "Input/output error" 00:20:14.160 } 00:20:14.160 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:14.160 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:14.160 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:14.160 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:14.160 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:14.160 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:14.160 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:15.101 nvme0n1 00:20:15.101 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:20:15.101 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:20:15.101 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.101 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.101 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.101 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.363 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:15.363 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.363 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.363 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.363 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:20:15.363 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:20:15.363 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:20:15.623 nvme0n1 00:20:15.623 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:20:15.623 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:20:15.623 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.884 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.884 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.884 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.884 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:15.884 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.884 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.884 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.884 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: '' 2s 00:20:15.884 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:20:15.884 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:20:15.884 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: 00:20:15.884 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:20:15.884 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:20:15.884 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:20:15.884 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: ]] 00:20:15.884 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OGJkN2I3MmYyNDllNDIzYjQ4NmMzYzcxNjEwZDk0MDgKZWII: 00:20:15.884 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:20:15.884 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:20:15.884 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:20:18.426 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:20:18.426 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:20:18.426 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:18.426 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:18.426 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:18.426 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:18.426 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:20:18.426 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:20:18.426 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.426 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.426 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.426 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: 2s 00:20:18.426 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:20:18.426 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:20:18.427 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:20:18.427 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: 00:20:18.427 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:20:18.427 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:20:18.427 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:20:18.427 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: ]] 00:20:18.427 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MmM5MWM0MGM5ZjM2M2RkMjBjYTBmZGY5YTNkNjcyMjcxOTAwOWUxMjdiOWVmYmYwP7jFYw==: 00:20:18.427 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:20:18.427 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:20:20.340 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:20:20.340 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:20:20.340 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:20.340 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:20.340 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:20.340 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:20.340 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:20:20.340 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.340 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:20.340 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.340 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.340 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.340 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:20.340 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:20.340 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:20.911 nvme0n1 00:20:20.911 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:20.911 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.911 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.911 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.911 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:20.911 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:21.483 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:20:21.483 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:20:21.483 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.483 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.483 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:21.483 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.483 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.483 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.483 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:20:21.483 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:20:21.743 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:20:21.743 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:20:21.743 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.003 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.003 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:22.003 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.003 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.003 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.003 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:22.003 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:22.003 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:22.003 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:20:22.003 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:22.003 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:20:22.003 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:22.003 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:22.003 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:22.264 request: 00:20:22.264 { 00:20:22.264 "name": "nvme0", 00:20:22.264 "dhchap_key": "key1", 00:20:22.264 "dhchap_ctrlr_key": "key3", 00:20:22.264 "method": "bdev_nvme_set_keys", 00:20:22.264 "req_id": 1 00:20:22.264 } 00:20:22.264 Got JSON-RPC error response 00:20:22.264 response: 00:20:22.264 { 00:20:22.264 "code": -13, 00:20:22.264 "message": "Permission denied" 00:20:22.264 } 00:20:22.264 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:22.264 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:22.264 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:22.264 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:22.264 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:20:22.264 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:20:22.264 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.526 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:20:22.526 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:20:23.515 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:20:23.515 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:20:23.515 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.775 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:20:23.775 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:23.775 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.775 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.775 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.775 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:23.775 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:23.775 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:24.717 nvme0n1 00:20:24.717 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:24.717 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.717 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.717 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.717 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:24.717 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:24.717 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:24.717 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
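The guarded call being expanded here is the mismatched-rotation case: the target was just given key2/key3 for this host, so the host's request to re-authenticate with key2/key0 must be refused, and the request/response that follows fails with -13 (Permission denied). The rotation pattern the suite keeps repeating condenses to three steps (sockets and names as in this run; the poll behavior follows from the --ctrlr-loss-timeout-sec 1 used at attach time):

    # 1. Target side: change the keys this host may authenticate with.
    ./scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
        "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key key3

    # 2. Host side: drive re-authentication on the live controller.
    ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3

    # 3. Observe the outcome: after a rejected rotation the controller
    #    drops out of the list once the loss timeout expires, so poll
    #    until jq reports zero controllers.
    while (( $(./scripts/rpc.py -s /var/tmp/host.sock \
            bdev_nvme_get_controllers | jq length) != 0 )); do
        sleep 1
    done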
00:20:24.717 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:24.717 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:20:24.717 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:24.717 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:24.717 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:24.979 request: 00:20:24.979 { 00:20:24.979 "name": "nvme0", 00:20:24.979 "dhchap_key": "key2", 00:20:24.979 "dhchap_ctrlr_key": "key0", 00:20:24.979 "method": "bdev_nvme_set_keys", 00:20:24.979 "req_id": 1 00:20:24.979 } 00:20:24.979 Got JSON-RPC error response 00:20:24.979 response: 00:20:24.979 { 00:20:24.979 "code": -13, 00:20:24.979 "message": "Permission denied" 00:20:24.979 } 00:20:24.979 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:24.979 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:24.979 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:24.979 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:24.979 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:20:24.979 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:20:24.979 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.239 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:20:25.239 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:20:26.180 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:20:26.180 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:20:26.180 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.442 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:20:26.442 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:20:26.442 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:20:26.442 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3852862 00:20:26.442 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3852862 ']' 00:20:26.442 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3852862 00:20:26.442 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:26.442 
09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:26.442 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3852862 00:20:26.442 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:26.442 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:26.442 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3852862' 00:20:26.442 killing process with pid 3852862 00:20:26.442 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3852862 00:20:26.442 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3852862 00:20:26.704 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:26.704 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:26.704 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:20:26.704 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:26.704 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:20:26.704 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:26.704 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:26.704 rmmod nvme_tcp 00:20:26.704 rmmod nvme_fabrics 00:20:26.704 rmmod nvme_keyring 00:20:26.704 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:26.704 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:20:26.704 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:20:26.704 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3878906 ']' 00:20:26.704 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3878906 00:20:26.704 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3878906 ']' 00:20:26.704 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3878906 00:20:26.704 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:26.704 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:26.704 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3878906 00:20:26.704 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:26.704 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:26.704 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3878906' 00:20:26.704 killing process with pid 3878906 00:20:26.704 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3878906 00:20:26.704 09:52:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3878906 00:20:26.704 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:26.704 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:26.704 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:26.704 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:20:26.704 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:20:26.704 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:26.704 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:20:26.704 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:26.704 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:26.704 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.704 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:26.704 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.249 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:29.249 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.rzE /tmp/spdk.key-sha256.8Xl /tmp/spdk.key-sha384.g31 /tmp/spdk.key-sha512.bxt /tmp/spdk.key-sha512.ewz /tmp/spdk.key-sha384.iwW /tmp/spdk.key-sha256.QU9 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:29.249 00:20:29.249 real 2m37.288s 00:20:29.249 user 5m53.735s 00:20:29.249 sys 0m24.974s 00:20:29.249 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:29.249 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.249 ************************************ 00:20:29.249 END TEST nvmf_auth_target 00:20:29.250 ************************************ 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:29.250 ************************************ 00:20:29.250 START TEST nvmf_bdevio_no_huge 00:20:29.250 ************************************ 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:29.250 * Looking for test storage... 
00:20:29.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:29.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.250 --rc genhtml_branch_coverage=1 00:20:29.250 --rc genhtml_function_coverage=1 00:20:29.250 --rc genhtml_legend=1 00:20:29.250 --rc geninfo_all_blocks=1 00:20:29.250 --rc geninfo_unexecuted_blocks=1 00:20:29.250 00:20:29.250 ' 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:29.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.250 --rc genhtml_branch_coverage=1 00:20:29.250 --rc genhtml_function_coverage=1 00:20:29.250 --rc genhtml_legend=1 00:20:29.250 --rc geninfo_all_blocks=1 00:20:29.250 --rc geninfo_unexecuted_blocks=1 00:20:29.250 00:20:29.250 ' 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:29.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.250 --rc genhtml_branch_coverage=1 00:20:29.250 --rc genhtml_function_coverage=1 00:20:29.250 --rc genhtml_legend=1 00:20:29.250 --rc geninfo_all_blocks=1 00:20:29.250 --rc geninfo_unexecuted_blocks=1 00:20:29.250 00:20:29.250 ' 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:29.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.250 --rc genhtml_branch_coverage=1 00:20:29.250 --rc genhtml_function_coverage=1 00:20:29.250 --rc genhtml_legend=1 00:20:29.250 --rc geninfo_all_blocks=1 00:20:29.250 --rc geninfo_unexecuted_blocks=1 00:20:29.250 00:20:29.250 ' 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:29.250 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:20:29.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:20:29.251 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:20:37.394 
09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:37.394 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:37.394 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:37.394 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:37.394 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.395 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:37.395 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:37.395 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.395 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:37.395 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:20:37.395 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:37.395 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:37.395 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:37.395 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:37.395 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:37.395 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:37.395 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:37.395 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:37.395 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:37.395 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:37.395 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:37.395 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:37.395 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:37.395 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:37.395 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:37.395 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:37.395 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:37.395 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:37.395 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:37.395 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:37.395 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:37.395 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:37.395 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:37.395 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:37.395 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:37.395 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:37.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:37.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.519 ms 00:20:37.395 00:20:37.395 --- 10.0.0.2 ping statistics --- 00:20:37.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.395 rtt min/avg/max/mdev = 0.519/0.519/0.519/0.000 ms 00:20:37.395 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:37.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:37.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:20:37.395 00:20:37.395 --- 10.0.0.1 ping statistics --- 00:20:37.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.395 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:20:37.395 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:37.395 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:20:37.395 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:37.395 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:37.395 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:37.395 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:37.395 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:37.395 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:37.395 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:37.395 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:37.395 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:37.395 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:37.395 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.395 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3887126 00:20:37.395 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3887126 00:20:37.395 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:37.395 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 3887126 ']' 00:20:37.395 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.395 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:20:37.395 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.395 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:37.395 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.395 [2024-11-27 09:52:52.226254] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:20:37.395 [2024-11-27 09:52:52.226332] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:37.395 [2024-11-27 09:52:52.331937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:37.395 [2024-11-27 09:52:52.393152] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:37.395 [2024-11-27 09:52:52.393209] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:37.395 [2024-11-27 09:52:52.393217] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:37.395 [2024-11-27 09:52:52.393224] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:37.395 [2024-11-27 09:52:52.393231] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
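The target for this suite is launched with --no-huge -s 1024, so DPDK backs SPDK's memory with ordinary anonymous pages in VA IOVA mode, capped at 1024 MB, which is exactly what the EAL parameter dump above reflects. A condensed stand-in for the launch-and-wait pattern; the harness's waitforlisten helper is more elaborate, and the rpc_get_methods poll below is a simplified substitute that assumes the default /var/tmp/spdk.sock RPC socket:

#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Start the NVMe-oF target inside the test's network namespace
# without hugepages; -s 1024 caps EAL memory, -m 0x78 picks cores 3-6.
ip netns exec cvl_0_0_ns_spdk \
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
pid=$!

# Poll the RPC socket until the app answers, bailing out if it dies.
until "$SPDK/scripts/rpc.py" rpc_get_methods &>/dev/null; do
    kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done
echo "nvmf_tgt is up with pid $pid"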
00:20:37.395 [2024-11-27 09:52:52.394736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:37.395 [2024-11-27 09:52:52.394894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:20:37.395 [2024-11-27 09:52:52.395052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:37.395 [2024-11-27 09:52:52.395052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:20:37.656 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:37.656 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:20:37.656 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:37.656 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:37.656 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.656 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:37.656 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:37.656 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.656 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.656 [2024-11-27 09:52:53.103717] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:37.656 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.656 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:37.656 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.656 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.918 Malloc0 00:20:37.918 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.918 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:37.918 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.918 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.918 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.918 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:37.918 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.918 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.918 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.918 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:20:37.918 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.918 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.918 [2024-11-27 09:52:53.157755] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:37.918 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.918 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:37.918 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:37.918 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:20:37.918 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:20:37.918 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:37.918 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:37.918 { 00:20:37.918 "params": { 00:20:37.918 "name": "Nvme$subsystem", 00:20:37.918 "trtype": "$TEST_TRANSPORT", 00:20:37.918 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.918 "adrfam": "ipv4", 00:20:37.918 "trsvcid": "$NVMF_PORT", 00:20:37.918 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.918 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.918 "hdgst": ${hdgst:-false}, 00:20:37.918 "ddgst": ${ddgst:-false} 00:20:37.918 }, 00:20:37.918 "method": "bdev_nvme_attach_controller" 00:20:37.918 } 00:20:37.918 EOF 00:20:37.918 )") 00:20:37.918 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:20:37.918 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:20:37.918 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:20:37.918 09:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:37.918 "params": { 00:20:37.918 "name": "Nvme1", 00:20:37.918 "trtype": "tcp", 00:20:37.918 "traddr": "10.0.0.2", 00:20:37.918 "adrfam": "ipv4", 00:20:37.918 "trsvcid": "4420", 00:20:37.918 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.918 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:37.918 "hdgst": false, 00:20:37.918 "ddgst": false 00:20:37.918 }, 00:20:37.918 "method": "bdev_nvme_attach_controller" 00:20:37.918 }' 00:20:37.918 [2024-11-27 09:52:53.216672] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
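The JSON printed just above is what gen_nvmf_target_json hands bdevio through an anonymous descriptor (--json /dev/fd/62). The same run can be reproduced with an ordinary config file; a sketch, with the bdev_nvme_attach_controller entry wrapped in the subsystems envelope SPDK's JSON config loader expects (file path illustrative):

#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Same attach parameters the harness generated above, as a file.
cat > /tmp/bdevio-nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# bdevio attaches the controller from the config, then runs its CUnit
# suite against the resulting Nvme1n1 bdev, again without hugepages.
"$SPDK/test/bdev/bdevio/bdevio" --json /tmp/bdevio-nvme.json --no-huge -s 1024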
00:20:37.918 [2024-11-27 09:52:53.216741] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3887416 ] 00:20:37.918 [2024-11-27 09:52:53.312059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:37.918 [2024-11-27 09:52:53.372439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:37.918 [2024-11-27 09:52:53.372601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.918 [2024-11-27 09:52:53.372602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:38.180 I/O targets: 00:20:38.180 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:38.180 00:20:38.180 00:20:38.180 CUnit - A unit testing framework for C - Version 2.1-3 00:20:38.180 http://cunit.sourceforge.net/ 00:20:38.180 00:20:38.180 00:20:38.180 Suite: bdevio tests on: Nvme1n1 00:20:38.441 Test: blockdev write read block ...passed 00:20:38.441 Test: blockdev write zeroes read block ...passed 00:20:38.441 Test: blockdev write zeroes read no split ...passed 00:20:38.441 Test: blockdev write zeroes read split ...passed 00:20:38.441 Test: blockdev write zeroes read split partial ...passed 00:20:38.441 Test: blockdev reset ...[2024-11-27 09:52:53.770169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:38.441 [2024-11-27 09:52:53.770279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1769800 (9): Bad file descriptor 00:20:38.441 [2024-11-27 09:52:53.877370] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:20:38.441 passed 00:20:38.442 Test: blockdev write read 8 blocks ...passed 00:20:38.703 Test: blockdev write read size > 128k ...passed 00:20:38.703 Test: blockdev write read invalid size ...passed 00:20:38.703 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:38.703 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:38.703 Test: blockdev write read max offset ...passed 00:20:38.703 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:38.703 Test: blockdev writev readv 8 blocks ...passed 00:20:38.703 Test: blockdev writev readv 30 x 1block ...passed 00:20:38.703 Test: blockdev writev readv block ...passed 00:20:38.703 Test: blockdev writev readv size > 128k ...passed 00:20:38.703 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:38.703 Test: blockdev comparev and writev ...[2024-11-27 09:52:54.098658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:38.703 [2024-11-27 09:52:54.098706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:38.703 [2024-11-27 09:52:54.098723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:38.703 [2024-11-27 09:52:54.098732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.703 [2024-11-27 09:52:54.099190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:38.703 [2024-11-27 09:52:54.099204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:38.703 [2024-11-27 09:52:54.099219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:38.703 [2024-11-27 09:52:54.099227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:38.703 [2024-11-27 09:52:54.099678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:38.703 [2024-11-27 09:52:54.099694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:38.703 [2024-11-27 09:52:54.099709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:38.703 [2024-11-27 09:52:54.099717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:38.703 [2024-11-27 09:52:54.100154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:38.703 [2024-11-27 09:52:54.100176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:38.703 [2024-11-27 09:52:54.100190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:38.703 [2024-11-27 09:52:54.100198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:38.703 passed 00:20:38.965 Test: blockdev nvme passthru rw ...passed 00:20:38.965 Test: blockdev nvme passthru vendor specific ...[2024-11-27 09:52:54.185737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:38.965 [2024-11-27 09:52:54.185758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:38.965 [2024-11-27 09:52:54.186012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:38.965 [2024-11-27 09:52:54.186025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:38.965 [2024-11-27 09:52:54.186296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:38.965 [2024-11-27 09:52:54.186309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:38.965 [2024-11-27 09:52:54.186533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:38.965 [2024-11-27 09:52:54.186546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:38.965 passed 00:20:38.965 Test: blockdev nvme admin passthru ...passed 00:20:38.965 Test: blockdev copy ...passed 00:20:38.965 00:20:38.965 Run Summary: Type Total Ran Passed Failed Inactive 00:20:38.965 suites 1 1 n/a 0 0 00:20:38.965 tests 23 23 23 0 0 00:20:38.965 asserts 152 152 152 0 n/a 00:20:38.965 00:20:38.965 Elapsed time = 1.217 seconds 00:20:39.226 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:39.226 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.226 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:39.226 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.226 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:39.226 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:39.226 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:39.226 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:20:39.226 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:39.226 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:20:39.226 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:39.226 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:39.226 rmmod nvme_tcp 00:20:39.226 rmmod nvme_fabrics 00:20:39.226 rmmod nvme_keyring 00:20:39.226 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:39.226 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:20:39.226 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:20:39.226 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3887126 ']' 00:20:39.226 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3887126 00:20:39.226 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 3887126 ']' 00:20:39.226 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 3887126 00:20:39.226 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:20:39.226 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.226 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3887126 00:20:39.226 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:20:39.226 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:20:39.226 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3887126' 00:20:39.226 killing process with pid 3887126 00:20:39.226 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 3887126 00:20:39.226 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 3887126 00:20:39.796 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:39.796 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:39.796 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:39.796 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:20:39.796 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:20:39.796 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:39.796 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:20:39.796 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:39.796 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:39.796 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.796 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:39.796 09:52:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.712 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:41.712 00:20:41.712 real 0m12.747s 00:20:41.712 user 0m14.594s 00:20:41.712 sys 0m6.885s 00:20:41.712 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:41.712 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:20:41.712 ************************************ 00:20:41.712 END TEST nvmf_bdevio_no_huge 00:20:41.712 ************************************ 00:20:41.712 09:52:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:41.712 09:52:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:41.712 09:52:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:41.712 09:52:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:41.712 ************************************ 00:20:41.712 START TEST nvmf_tls 00:20:41.712 ************************************ 00:20:41.712 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:41.974 * Looking for test storage... 00:20:41.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:41.974 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:41.974 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:20:41.974 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:41.974 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:41.974 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:41.974 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:41.974 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:41.974 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:20:41.974 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:20:41.974 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:20:41.974 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:20:41.974 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:20:41.974 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:20:41.974 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:20:41.974 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:41.974 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:20:41.974 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:20:41.974 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:41.974 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:41.974 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:20:41.974 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:20:41.974 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:41.974 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:20:41.974 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:20:41.974 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:20:41.974 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:20:41.974 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:41.974 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:20:41.974 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:20:41.974 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:41.974 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:41.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.975 --rc genhtml_branch_coverage=1 00:20:41.975 --rc genhtml_function_coverage=1 00:20:41.975 --rc genhtml_legend=1 00:20:41.975 --rc geninfo_all_blocks=1 00:20:41.975 --rc geninfo_unexecuted_blocks=1 00:20:41.975 00:20:41.975 ' 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:41.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.975 --rc genhtml_branch_coverage=1 00:20:41.975 --rc genhtml_function_coverage=1 00:20:41.975 --rc genhtml_legend=1 00:20:41.975 --rc geninfo_all_blocks=1 00:20:41.975 --rc geninfo_unexecuted_blocks=1 00:20:41.975 00:20:41.975 ' 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:41.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.975 --rc genhtml_branch_coverage=1 00:20:41.975 --rc genhtml_function_coverage=1 00:20:41.975 --rc genhtml_legend=1 00:20:41.975 --rc geninfo_all_blocks=1 00:20:41.975 --rc geninfo_unexecuted_blocks=1 00:20:41.975 00:20:41.975 ' 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:41.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.975 --rc genhtml_branch_coverage=1 00:20:41.975 --rc genhtml_function_coverage=1 00:20:41.975 --rc genhtml_legend=1 00:20:41.975 --rc geninfo_all_blocks=1 00:20:41.975 --rc geninfo_unexecuted_blocks=1 00:20:41.975 00:20:41.975 ' 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
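The scripts/common.sh xtrace above (cmp_versions, decimal, the ver1/ver2 arrays) is the suite deciding whether the installed lcov predates 2.0, which selects the option spellings exported right after it: lt 1.15 2 succeeds, so the legacy --rc lcov_branch_coverage=1 names go into LCOV_OPTS. A minimal runnable sketch of that comparison follows, with identifiers mirroring the trace (ver1_l, ver2_l, cmp_versions); it is a simplification to help read the trace, not the verbatim scripts/common.sh source.

lt() { cmp_versions "$1" "<" "$2"; }

cmp_versions() {
    local op=$2 v
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"    # splits 1.15 -> (1 15), 2 -> (2)
    IFS='.-:' read -ra ver2 <<< "$3"
    local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        # Missing components compare as 0; non-numeric components are skipped.
        local c1=${ver1[v]:-0} c2=${ver2[v]:-0}
        [[ $c1 =~ ^[0-9]+$ && $c2 =~ ^[0-9]+$ ]] || continue
        if (( c1 > c2 )); then [[ $op == ">" ]] && return 0 || return 1; fi
        if (( c1 < c2 )); then [[ $op == "<" ]] && return 0 || return 1; fi
    done
    return 1    # components all equal: neither strictly "<" nor ">"
}

lt 1.15 2 && echo "lcov is pre-2.0: export legacy --rc lcov_*_coverage names"

Splitting on IFS='.-:' is what lets mixed tags such as 1.15-rc1 compare component-wise; here 1 < 2 decides at the first component, exactly the path the trace takes.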
00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:41.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:20:41.975 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:50.123 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:50.123 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:50.123 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:50.123 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:50.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:50.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:20:50.123 00:20:50.123 --- 10.0.0.2 ping statistics --- 00:20:50.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.123 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:20:50.123 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:50.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:50.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:20:50.123 00:20:50.123 --- 10.0.0.1 ping statistics --- 00:20:50.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.123 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:20:50.124 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:50.124 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:20:50.124 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:50.124 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:50.124 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:50.124 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:50.124 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:50.124 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:50.124 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:50.124 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:50.124 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:50.124 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:50.124 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.124 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3891865 00:20:50.124 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3891865 00:20:50.124 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:50.124 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3891865 ']' 00:20:50.124 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.124 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:50.124 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.124 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:50.124 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.124 [2024-11-27 09:53:04.959139] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
00:20:50.124 [2024-11-27 09:53:04.959215] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:50.124 [2024-11-27 09:53:05.058758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.124 [2024-11-27 09:53:05.109285] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:50.124 [2024-11-27 09:53:05.109337] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:50.124 [2024-11-27 09:53:05.109345] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:50.124 [2024-11-27 09:53:05.109352] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:50.124 [2024-11-27 09:53:05.109358] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:50.124 [2024-11-27 09:53:05.110118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:50.385 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:50.385 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:50.385 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:50.385 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:50.385 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.385 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:50.385 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:20:50.385 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:50.646 true 00:20:50.646 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:50.646 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:20:50.907 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:20:50.907 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:20:50.907 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:51.168 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:51.168 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:20:51.168 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:20:51.168 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:20:51.168 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:51.430 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:51.430 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:20:51.691 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:20:51.691 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:20:51.691 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:51.691 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:20:51.691 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:20:51.691 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:20:51.691 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:51.950 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:51.950 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:20:52.209 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:20:52.209 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:20:52.209 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:52.209 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:52.209 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:20:52.467 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:20:52.467 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:20:52.467 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:52.467 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:52.467 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:52.467 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:52.467 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:52.467 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:52.467 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:52.467 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:52.467 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:52.467 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:52.467 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:20:52.467 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:52.467 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:20:52.467 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:52.467 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:52.726 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:52.726 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:52.726 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.TYXjzlweq8 00:20:52.726 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:20:52.726 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.pSqqic9vX0 00:20:52.726 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:52.726 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:52.726 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.TYXjzlweq8 00:20:52.726 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.pSqqic9vX0 00:20:52.726 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:52.727 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:52.986 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.TYXjzlweq8 00:20:52.986 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.TYXjzlweq8 00:20:52.986 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:53.300 [2024-11-27 09:53:08.597716] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.300 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:53.560 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:53.560 [2024-11-27 09:53:08.950540] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:53.560 [2024-11-27 09:53:08.950749] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.560 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:53.818 malloc0 00:20:53.818 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:54.077 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.TYXjzlweq8 00:20:54.077 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:54.336 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.TYXjzlweq8 00:21:04.334 Initializing NVMe Controllers 00:21:04.334 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:04.334 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:04.334 Initialization complete. Launching workers. 00:21:04.334 ======================================================== 00:21:04.334 Latency(us) 00:21:04.334 Device Information : IOPS MiB/s Average min max 00:21:04.334 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18578.65 72.57 3444.99 1168.03 4480.43 00:21:04.334 ======================================================== 00:21:04.334 Total : 18578.65 72.57 3444.99 1168.03 4480.43 00:21:04.334 00:21:04.334 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TYXjzlweq8 00:21:04.334 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:04.334 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:04.334 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:04.334 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TYXjzlweq8 00:21:04.334 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:04.334 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3894816 00:21:04.334 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:04.334 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3894816 /var/tmp/bdevperf.sock 00:21:04.334 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:04.334 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3894816 ']' 00:21:04.334 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:04.334 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:04.334 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:04.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:04.334 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:04.334 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.334 [2024-11-27 09:53:19.792239] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:21:04.334 [2024-11-27 09:53:19.792299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3894816 ] 00:21:04.596 [2024-11-27 09:53:19.878995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.596 [2024-11-27 09:53:19.914791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:05.170 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:05.170 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:05.170 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TYXjzlweq8 00:21:05.432 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:05.693 [2024-11-27 09:53:20.910284] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:05.693 TLSTESTn1 00:21:05.693 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:05.693 Running I/O for 10 seconds... 
00:21:08.015 3997.00 IOPS, 15.61 MiB/s [2024-11-27T08:53:24.421Z] 4566.00 IOPS, 17.84 MiB/s [2024-11-27T08:53:25.362Z] 4929.67 IOPS, 19.26 MiB/s [2024-11-27T08:53:26.302Z] 5049.50 IOPS, 19.72 MiB/s [2024-11-27T08:53:27.243Z] 5117.80 IOPS, 19.99 MiB/s [2024-11-27T08:53:28.191Z] 5213.83 IOPS, 20.37 MiB/s [2024-11-27T08:53:29.261Z] 5393.86 IOPS, 21.07 MiB/s [2024-11-27T08:53:30.202Z] 5425.88 IOPS, 21.19 MiB/s [2024-11-27T08:53:31.144Z] 5371.67 IOPS, 20.98 MiB/s [2024-11-27T08:53:31.144Z] 5447.80 IOPS, 21.28 MiB/s 00:21:15.678 Latency(us) 00:21:15.678 [2024-11-27T08:53:31.144Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.678 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:15.678 Verification LBA range: start 0x0 length 0x2000 00:21:15.678 TLSTESTn1 : 10.02 5450.47 21.29 0.00 0.00 23442.99 5789.01 31020.37 00:21:15.678 [2024-11-27T08:53:31.144Z] =================================================================================================================== 00:21:15.678 [2024-11-27T08:53:31.144Z] Total : 5450.47 21.29 0.00 0.00 23442.99 5789.01 31020.37 00:21:15.678 { 00:21:15.678 "results": [ 00:21:15.678 { 00:21:15.678 "job": "TLSTESTn1", 00:21:15.678 "core_mask": "0x4", 00:21:15.678 "workload": "verify", 00:21:15.678 "status": "finished", 00:21:15.678 "verify_range": { 00:21:15.678 "start": 0, 00:21:15.678 "length": 8192 00:21:15.678 }, 00:21:15.678 "queue_depth": 128, 00:21:15.678 "io_size": 4096, 00:21:15.678 "runtime": 10.018227, 00:21:15.678 "iops": 5450.46543664862, 00:21:15.678 "mibps": 21.290880611908673, 00:21:15.678 "io_failed": 0, 00:21:15.678 "io_timeout": 0, 00:21:15.678 "avg_latency_us": 23442.994371108343, 00:21:15.678 "min_latency_us": 5789.013333333333, 00:21:15.678 "max_latency_us": 31020.373333333333 00:21:15.678 } 00:21:15.678 ], 00:21:15.678 "core_count": 1 00:21:15.678 } 00:21:15.940 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:15.940 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3894816 00:21:15.940 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3894816 ']' 00:21:15.940 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3894816 00:21:15.940 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:15.940 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:15.940 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3894816 00:21:15.940 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:15.940 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:15.940 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3894816' 00:21:15.940 killing process with pid 3894816 00:21:15.940 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3894816 00:21:15.940 Received shutdown signal, test time was about 10.000000 seconds 00:21:15.940 00:21:15.940 Latency(us) 00:21:15.940 [2024-11-27T08:53:31.406Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.940 [2024-11-27T08:53:31.406Z] 
=================================================================================================================== 00:21:15.940 [2024-11-27T08:53:31.406Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:15.940 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3894816 00:21:15.940 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pSqqic9vX0 00:21:15.940 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:15.940 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pSqqic9vX0 00:21:15.940 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:15.940 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:15.940 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:15.940 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:15.940 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pSqqic9vX0 00:21:15.940 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:15.940 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:15.940 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:15.940 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.pSqqic9vX0 00:21:15.940 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:15.940 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3897048 00:21:15.940 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:15.940 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3897048 /var/tmp/bdevperf.sock 00:21:15.940 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:15.940 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3897048 ']' 00:21:15.941 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:15.941 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:15.941 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:15.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:15.941 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:15.941 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.941 [2024-11-27 09:53:31.382143] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:21:15.941 [2024-11-27 09:53:31.382205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3897048 ] 00:21:16.201 [2024-11-27 09:53:31.463709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.201 [2024-11-27 09:53:31.492599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:16.773 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:16.773 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:16.773 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pSqqic9vX0 00:21:17.033 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:17.294 [2024-11-27 09:53:32.511419] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:17.294 [2024-11-27 09:53:32.515957] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:17.294 [2024-11-27 09:53:32.516587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2014bd0 (107): Transport endpoint is not connected 00:21:17.294 [2024-11-27 09:53:32.517581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2014bd0 (9): Bad file descriptor 00:21:17.294 [2024-11-27 09:53:32.518583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:21:17.294 [2024-11-27 09:53:32.518590] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:17.294 [2024-11-27 09:53:32.518596] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:21:17.294 [2024-11-27 09:53:32.518604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:21:17.294 request: 00:21:17.294 { 00:21:17.294 "name": "TLSTEST", 00:21:17.294 "trtype": "tcp", 00:21:17.294 "traddr": "10.0.0.2", 00:21:17.294 "adrfam": "ipv4", 00:21:17.294 "trsvcid": "4420", 00:21:17.294 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.294 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:17.294 "prchk_reftag": false, 00:21:17.294 "prchk_guard": false, 00:21:17.294 "hdgst": false, 00:21:17.294 "ddgst": false, 00:21:17.294 "psk": "key0", 00:21:17.294 "allow_unrecognized_csi": false, 00:21:17.294 "method": "bdev_nvme_attach_controller", 00:21:17.294 "req_id": 1 00:21:17.294 } 00:21:17.294 Got JSON-RPC error response 00:21:17.294 response: 00:21:17.294 { 00:21:17.294 "code": -5, 00:21:17.294 "message": "Input/output error" 00:21:17.294 } 00:21:17.294 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3897048 00:21:17.294 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3897048 ']' 00:21:17.294 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3897048 00:21:17.294 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:17.294 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:17.294 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3897048 00:21:17.294 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:17.294 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:17.294 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3897048' 00:21:17.294 killing process with pid 3897048 00:21:17.294 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3897048 00:21:17.294 Received shutdown signal, test time was about 10.000000 seconds 00:21:17.294 00:21:17.294 Latency(us) 00:21:17.294 [2024-11-27T08:53:32.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.294 [2024-11-27T08:53:32.760Z] =================================================================================================================== 00:21:17.294 [2024-11-27T08:53:32.760Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:17.295 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3897048 00:21:17.295 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:17.295 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:17.295 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:17.295 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:17.295 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:17.295 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TYXjzlweq8 00:21:17.295 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:17.295 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.TYXjzlweq8 00:21:17.295 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:17.295 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:17.295 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:17.295 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:17.295 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TYXjzlweq8 00:21:17.295 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:17.295 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:17.295 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:17.295 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TYXjzlweq8 00:21:17.295 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:17.295 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3897218 00:21:17.295 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:17.295 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3897218 /var/tmp/bdevperf.sock 00:21:17.295 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:17.295 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3897218 ']' 00:21:17.295 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:17.295 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:17.295 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:17.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:17.295 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:17.295 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.295 [2024-11-27 09:53:32.758416] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
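[editor's note] The first failed attach above (host1 against cnode1 with psk key0) is driven through SPDK's JSON-RPC interface, and bdevperf echoes the request and error response verbatim. As a minimal sketch of what scripts/rpc.py does under the hood, the same call could be issued directly over the bdevperf Unix socket. The "params" body is copied from the log above; the "jsonrpc"/"id" envelope and the single-recv response handling are simplifying assumptions of this sketch, not a claim about rpc.py internals.

    # Hedged sketch: send the bdev_nvme_attach_controller call from the log
    # straight to the bdevperf JSON-RPC Unix socket, as scripts/rpc.py would.
    import json
    import socket

    request = {
        "jsonrpc": "2.0",  # standard JSON-RPC envelope (assumed)
        "id": 1,
        "method": "bdev_nvme_attach_controller",
        "params": {  # parameters exactly as echoed in the log above
            "name": "TLSTEST",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "prchk_reftag": False,
            "prchk_guard": False,
            "hdgst": False,
            "ddgst": False,
            "psk": "key0",
            "allow_unrecognized_csi": False,
        },
    }

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect("/var/tmp/bdevperf.sock")
        sock.sendall(json.dumps(request).encode())
        # One recv() is a simplification; rpc.py keeps reading until a
        # complete JSON object has arrived.
        print(sock.recv(65536).decode())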
00:21:17.295 [2024-11-27 09:53:32.758471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3897218 ] 00:21:17.556 [2024-11-27 09:53:32.841162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.556 [2024-11-27 09:53:32.869596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:18.127 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:18.127 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:18.127 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TYXjzlweq8 00:21:18.387 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:21:18.648 [2024-11-27 09:53:33.884405] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:18.648 [2024-11-27 09:53:33.888864] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:18.648 [2024-11-27 09:53:33.888884] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:18.648 [2024-11-27 09:53:33.888903] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:18.648 [2024-11-27 09:53:33.889557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba3bd0 (107): Transport endpoint is not connected 00:21:18.648 [2024-11-27 09:53:33.890552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba3bd0 (9): Bad file descriptor 00:21:18.648 [2024-11-27 09:53:33.891554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:21:18.648 [2024-11-27 09:53:33.891561] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:18.648 [2024-11-27 09:53:33.891568] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:21:18.648 [2024-11-27 09:53:33.891576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:21:18.648 request: 00:21:18.648 { 00:21:18.648 "name": "TLSTEST", 00:21:18.648 "trtype": "tcp", 00:21:18.648 "traddr": "10.0.0.2", 00:21:18.648 "adrfam": "ipv4", 00:21:18.648 "trsvcid": "4420", 00:21:18.648 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:18.648 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:18.648 "prchk_reftag": false, 00:21:18.648 "prchk_guard": false, 00:21:18.648 "hdgst": false, 00:21:18.648 "ddgst": false, 00:21:18.648 "psk": "key0", 00:21:18.648 "allow_unrecognized_csi": false, 00:21:18.648 "method": "bdev_nvme_attach_controller", 00:21:18.648 "req_id": 1 00:21:18.648 } 00:21:18.648 Got JSON-RPC error response 00:21:18.648 response: 00:21:18.648 { 00:21:18.648 "code": -5, 00:21:18.648 "message": "Input/output error" 00:21:18.648 } 00:21:18.648 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3897218 00:21:18.648 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3897218 ']' 00:21:18.648 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3897218 00:21:18.648 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:18.648 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:18.648 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3897218 00:21:18.648 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:18.648 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:18.648 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3897218' 00:21:18.648 killing process with pid 3897218 00:21:18.648 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3897218 00:21:18.648 Received shutdown signal, test time was about 10.000000 seconds 00:21:18.648 00:21:18.648 Latency(us) 00:21:18.648 [2024-11-27T08:53:34.114Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.648 [2024-11-27T08:53:34.114Z] =================================================================================================================== 00:21:18.648 [2024-11-27T08:53:34.114Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:18.648 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3897218 00:21:18.648 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:18.648 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:18.648 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:18.648 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:18.648 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:18.648 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TYXjzlweq8 00:21:18.648 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:18.648 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.TYXjzlweq8 00:21:18.648 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:18.648 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:18.648 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:18.648 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:18.648 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TYXjzlweq8 00:21:18.648 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:18.648 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:18.648 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:18.648 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TYXjzlweq8 00:21:18.648 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:18.648 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3897530 00:21:18.648 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:18.648 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3897530 /var/tmp/bdevperf.sock 00:21:18.649 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:18.649 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3897530 ']' 00:21:18.649 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:18.649 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:18.649 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:18.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:18.649 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:18.649 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.910 [2024-11-27 09:53:34.132950] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
00:21:18.910 [2024-11-27 09:53:34.133005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3897530 ] 00:21:18.910 [2024-11-27 09:53:34.215887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.910 [2024-11-27 09:53:34.244151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:19.482 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:19.482 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:19.482 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TYXjzlweq8 00:21:19.743 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:20.004 [2024-11-27 09:53:35.238730] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:20.004 [2024-11-27 09:53:35.250242] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:20.004 [2024-11-27 09:53:35.250260] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:20.004 [2024-11-27 09:53:35.250280] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:20.004 [2024-11-27 09:53:35.250968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeefbd0 (107): Transport endpoint is not connected 00:21:20.004 [2024-11-27 09:53:35.251964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeefbd0 (9): Bad file descriptor 00:21:20.004 [2024-11-27 09:53:35.252966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:21:20.004 [2024-11-27 09:53:35.252974] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:20.004 [2024-11-27 09:53:35.252980] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:21:20.004 [2024-11-27 09:53:35.252991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
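[editor's note] Both negative tests fail the same way on the target side: tcp.c and posix.c report that no PSK is registered for the TLS identity they computed. Judging from the error strings above, that identity is laid out as "NVMe0R01 <hostnqn> <subnqn>", so a key registered for the host1/cnode1 pair can never match an attach that uses host2, or one that targets cnode2. A small illustration follows; the helper name and the dict-backed lookup are hypothetical, only the identity layout is taken from the log.

    # Hypothetical illustration of the identity lookup that fails above.
    # Only the "NVMe0R01 <hostnqn> <subnqn>" layout comes from the log.
    def psk_identity(hostnqn: str, subnqn: str) -> str:
        return f"NVMe0R01 {hostnqn} {subnqn}"

    registered = {
        psk_identity("nqn.2016-06.io.spdk:host1",
                     "nqn.2016-06.io.spdk:cnode1"): "key0",
    }

    # The attach above used cnode2, so the lookup misses:
    ident = psk_identity("nqn.2016-06.io.spdk:host1",
                         "nqn.2016-06.io.spdk:cnode2")
    print(registered.get(ident))  # None -> "Could not find PSK for identity"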
00:21:20.004 request: 00:21:20.004 { 00:21:20.004 "name": "TLSTEST", 00:21:20.004 "trtype": "tcp", 00:21:20.004 "traddr": "10.0.0.2", 00:21:20.004 "adrfam": "ipv4", 00:21:20.004 "trsvcid": "4420", 00:21:20.004 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:20.004 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:20.004 "prchk_reftag": false, 00:21:20.004 "prchk_guard": false, 00:21:20.004 "hdgst": false, 00:21:20.004 "ddgst": false, 00:21:20.004 "psk": "key0", 00:21:20.004 "allow_unrecognized_csi": false, 00:21:20.004 "method": "bdev_nvme_attach_controller", 00:21:20.004 "req_id": 1 00:21:20.004 } 00:21:20.004 Got JSON-RPC error response 00:21:20.004 response: 00:21:20.004 { 00:21:20.004 "code": -5, 00:21:20.004 "message": "Input/output error" 00:21:20.004 } 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3897530 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3897530 ']' 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3897530 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3897530 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3897530' 00:21:20.005 killing process with pid 3897530 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3897530 00:21:20.005 Received shutdown signal, test time was about 10.000000 seconds 00:21:20.005 00:21:20.005 Latency(us) 00:21:20.005 [2024-11-27T08:53:35.471Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.005 [2024-11-27T08:53:35.471Z] =================================================================================================================== 00:21:20.005 [2024-11-27T08:53:35.471Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3897530 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:20.005 
09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3897872 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3897872 /var/tmp/bdevperf.sock 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3897872 ']' 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:20.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:20.005 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.266 [2024-11-27 09:53:35.484637] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
00:21:20.266 [2024-11-27 09:53:35.484699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3897872 ] 00:21:20.266 [2024-11-27 09:53:35.569148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.266 [2024-11-27 09:53:35.597110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:20.838 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:20.838 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:20.838 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:21:21.099 [2024-11-27 09:53:36.435073] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:21:21.099 [2024-11-27 09:53:36.435097] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:21.099 request: 00:21:21.099 { 00:21:21.099 "name": "key0", 00:21:21.099 "path": "", 00:21:21.099 "method": "keyring_file_add_key", 00:21:21.099 "req_id": 1 00:21:21.099 } 00:21:21.099 Got JSON-RPC error response 00:21:21.099 response: 00:21:21.099 { 00:21:21.099 "code": -1, 00:21:21.099 "message": "Operation not permitted" 00:21:21.099 } 00:21:21.099 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:21.360 [2024-11-27 09:53:36.619611] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:21.360 [2024-11-27 09:53:36.619639] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:21:21.360 request: 00:21:21.360 { 00:21:21.360 "name": "TLSTEST", 00:21:21.360 "trtype": "tcp", 00:21:21.360 "traddr": "10.0.0.2", 00:21:21.360 "adrfam": "ipv4", 00:21:21.360 "trsvcid": "4420", 00:21:21.360 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.360 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:21.360 "prchk_reftag": false, 00:21:21.360 "prchk_guard": false, 00:21:21.360 "hdgst": false, 00:21:21.360 "ddgst": false, 00:21:21.360 "psk": "key0", 00:21:21.360 "allow_unrecognized_csi": false, 00:21:21.360 "method": "bdev_nvme_attach_controller", 00:21:21.360 "req_id": 1 00:21:21.360 } 00:21:21.360 Got JSON-RPC error response 00:21:21.360 response: 00:21:21.360 { 00:21:21.360 "code": -126, 00:21:21.360 "message": "Required key not available" 00:21:21.360 } 00:21:21.360 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3897872 00:21:21.360 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3897872 ']' 00:21:21.360 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3897872 00:21:21.360 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:21.360 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:21.360 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
3897872 00:21:21.360 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:21.360 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:21.360 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3897872' 00:21:21.360 killing process with pid 3897872 00:21:21.360 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3897872 00:21:21.360 Received shutdown signal, test time was about 10.000000 seconds 00:21:21.360 00:21:21.360 Latency(us) 00:21:21.360 [2024-11-27T08:53:36.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.360 [2024-11-27T08:53:36.826Z] =================================================================================================================== 00:21:21.360 [2024-11-27T08:53:36.827Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:21.361 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3897872 00:21:21.361 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:21.361 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:21.361 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:21.361 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:21.361 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:21.361 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3891865 00:21:21.361 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3891865 ']' 00:21:21.361 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3891865 00:21:21.361 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:21.361 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:21.361 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3891865 00:21:21.623 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:21.623 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:21.623 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3891865' 00:21:21.623 killing process with pid 3891865 00:21:21.623 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3891865 00:21:21.623 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3891865 00:21:21.623 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:21.623 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:21.623 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:21:21.623 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:21.623 09:53:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:21.623 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:21:21.623 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:21:21.623 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:21.623 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:21:21.623 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.qiYKqJAW4a 00:21:21.623 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:21.623 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.qiYKqJAW4a 00:21:21.623 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:21:21.623 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:21.623 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:21.623 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.623 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3898229 00:21:21.623 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3898229 00:21:21.623 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:21.623 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3898229 ']' 00:21:21.623 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.623 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:21.623 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.623 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:21.623 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.885 [2024-11-27 09:53:37.095939] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:21:21.885 [2024-11-27 09:53:37.095994] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.885 [2024-11-27 09:53:37.186249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.885 [2024-11-27 09:53:37.214156] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:21.885 [2024-11-27 09:53:37.214193] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
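[editor's note] The format_interchange_psk step above turns the raw key string 00112233445566778899aabbccddeeff0011223344556677 with digest 2 into the interchange form NVMeTLSkey-1:02:MDAx...wWXNJw==:. Judging from the inline `python -` helper and its output, the encoding base64-encodes the key material with a CRC32 appended. The sketch below reproduces the key printed above under two assumptions: the key string is taken as ASCII bytes, and the CRC32 is appended little-endian.

    # Sketch of the NVMe TLS PSK interchange encoding used above.
    # Assumptions: key taken as ASCII bytes, CRC32 appended little-endian;
    # digest 2 selects the SHA-384 variant of the interchange format.
    import base64
    import zlib

    def format_interchange_psk(key: str, digest: int) -> str:
        data = key.encode("ascii")
        crc = zlib.crc32(data).to_bytes(4, byteorder="little")
        b64 = base64.b64encode(data + crc).decode("ascii")
        return f"NVMeTLSkey-1:{digest:02x}:{b64}:"

    print(format_interchange_psk(
        "00112233445566778899aabbccddeeff0011223344556677", 2))
    # Value printed in the log:
    # NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: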
00:21:21.885 [2024-11-27 09:53:37.214198] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:21.885 [2024-11-27 09:53:37.214204] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:21.885 [2024-11-27 09:53:37.214208] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:21.885 [2024-11-27 09:53:37.214663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:22.456 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:22.456 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:22.456 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:22.456 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:22.456 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.717 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:22.717 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.qiYKqJAW4a 00:21:22.717 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.qiYKqJAW4a 00:21:22.717 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:22.717 [2024-11-27 09:53:38.097569] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.717 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:22.978 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:23.238 [2024-11-27 09:53:38.466486] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:23.238 [2024-11-27 09:53:38.466689] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.238 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:23.238 malloc0 00:21:23.238 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:23.499 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qiYKqJAW4a 00:21:23.760 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:23.760 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qiYKqJAW4a 00:21:23.760 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:21:23.760 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:23.760 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:23.760 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.qiYKqJAW4a 00:21:23.760 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:23.760 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3898596 00:21:23.760 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:23.760 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:23.760 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3898596 /var/tmp/bdevperf.sock 00:21:23.760 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3898596 ']' 00:21:23.760 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:23.760 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:23.760 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:23.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:23.760 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:23.760 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:24.021 [2024-11-27 09:53:39.273088] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
00:21:24.021 [2024-11-27 09:53:39.273142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3898596 ] 00:21:24.021 [2024-11-27 09:53:39.338147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.021 [2024-11-27 09:53:39.366589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:24.021 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:24.021 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:24.021 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qiYKqJAW4a 00:21:24.283 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:24.543 [2024-11-27 09:53:39.775826] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:24.543 TLSTESTn1 00:21:24.543 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:24.543 Running I/O for 10 seconds... 00:21:26.504 6224.00 IOPS, 24.31 MiB/s [2024-11-27T08:53:43.354Z] 5744.00 IOPS, 22.44 MiB/s [2024-11-27T08:53:44.298Z] 5687.33 IOPS, 22.22 MiB/s [2024-11-27T08:53:45.242Z] 5834.50 IOPS, 22.79 MiB/s [2024-11-27T08:53:46.184Z] 5914.20 IOPS, 23.10 MiB/s [2024-11-27T08:53:47.128Z] 5977.83 IOPS, 23.35 MiB/s [2024-11-27T08:53:48.069Z] 6024.43 IOPS, 23.53 MiB/s [2024-11-27T08:53:49.013Z] 6086.25 IOPS, 23.77 MiB/s [2024-11-27T08:53:50.395Z] 6130.22 IOPS, 23.95 MiB/s [2024-11-27T08:53:50.395Z] 6102.30 IOPS, 23.84 MiB/s 00:21:34.929 Latency(us) 00:21:34.929 [2024-11-27T08:53:50.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.929 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:34.929 Verification LBA range: start 0x0 length 0x2000 00:21:34.929 TLSTESTn1 : 10.02 6105.15 23.85 0.00 0.00 20934.02 6717.44 23374.51 00:21:34.929 [2024-11-27T08:53:50.396Z] =================================================================================================================== 00:21:34.930 [2024-11-27T08:53:50.396Z] Total : 6105.15 23.85 0.00 0.00 20934.02 6717.44 23374.51 00:21:34.930 { 00:21:34.930 "results": [ 00:21:34.930 { 00:21:34.930 "job": "TLSTESTn1", 00:21:34.930 "core_mask": "0x4", 00:21:34.930 "workload": "verify", 00:21:34.930 "status": "finished", 00:21:34.930 "verify_range": { 00:21:34.930 "start": 0, 00:21:34.930 "length": 8192 00:21:34.930 }, 00:21:34.930 "queue_depth": 128, 00:21:34.930 "io_size": 4096, 00:21:34.930 "runtime": 10.015814, 00:21:34.930 "iops": 6105.145323185914, 00:21:34.930 "mibps": 23.848223918694977, 00:21:34.930 "io_failed": 0, 00:21:34.930 "io_timeout": 0, 00:21:34.930 "avg_latency_us": 20934.0155338959, 00:21:34.930 "min_latency_us": 6717.44, 00:21:34.930 "max_latency_us": 23374.506666666668 00:21:34.930 } 00:21:34.930 ], 00:21:34.930 "core_count": 1 
00:21:34.930 } 00:21:34.930 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:34.930 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3898596 00:21:34.930 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3898596 ']' 00:21:34.930 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3898596 00:21:34.930 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:34.930 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:34.930 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3898596 00:21:34.930 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:34.930 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:34.930 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3898596' 00:21:34.930 killing process with pid 3898596 00:21:34.930 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3898596 00:21:34.930 Received shutdown signal, test time was about 10.000000 seconds 00:21:34.930 00:21:34.930 Latency(us) 00:21:34.930 [2024-11-27T08:53:50.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.930 [2024-11-27T08:53:50.396Z] =================================================================================================================== 00:21:34.930 [2024-11-27T08:53:50.396Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:34.930 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3898596 00:21:34.930 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.qiYKqJAW4a 00:21:34.930 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qiYKqJAW4a 00:21:34.930 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:34.930 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qiYKqJAW4a 00:21:34.930 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:34.930 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:34.930 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:34.930 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:34.930 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qiYKqJAW4a 00:21:34.930 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:34.930 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:34.930 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:34.930 09:53:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.qiYKqJAW4a 00:21:34.930 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:34.930 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3900675 00:21:34.930 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:34.930 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3900675 /var/tmp/bdevperf.sock 00:21:34.930 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:34.930 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3900675 ']' 00:21:34.930 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:34.930 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:34.930 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:34.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:34.930 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:34.930 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.930 [2024-11-27 09:53:50.249255] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
00:21:34.930 [2024-11-27 09:53:50.249313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3900675 ] 00:21:34.930 [2024-11-27 09:53:50.334304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.930 [2024-11-27 09:53:50.362780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.873 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:35.873 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:35.873 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qiYKqJAW4a 00:21:35.873 [2024-11-27 09:53:51.204982] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.qiYKqJAW4a': 0100666 00:21:35.873 [2024-11-27 09:53:51.205008] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:35.873 request: 00:21:35.873 { 00:21:35.873 "name": "key0", 00:21:35.873 "path": "/tmp/tmp.qiYKqJAW4a", 00:21:35.873 "method": "keyring_file_add_key", 00:21:35.873 "req_id": 1 00:21:35.873 } 00:21:35.873 Got JSON-RPC error response 00:21:35.873 response: 00:21:35.873 { 00:21:35.873 "code": -1, 00:21:35.873 "message": "Operation not permitted" 00:21:35.873 } 00:21:35.873 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:36.134 [2024-11-27 09:53:51.385509] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:36.134 [2024-11-27 09:53:51.385534] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:21:36.134 request: 00:21:36.134 { 00:21:36.134 "name": "TLSTEST", 00:21:36.134 "trtype": "tcp", 00:21:36.134 "traddr": "10.0.0.2", 00:21:36.134 "adrfam": "ipv4", 00:21:36.134 "trsvcid": "4420", 00:21:36.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:36.134 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:36.134 "prchk_reftag": false, 00:21:36.134 "prchk_guard": false, 00:21:36.134 "hdgst": false, 00:21:36.134 "ddgst": false, 00:21:36.134 "psk": "key0", 00:21:36.134 "allow_unrecognized_csi": false, 00:21:36.134 "method": "bdev_nvme_attach_controller", 00:21:36.134 "req_id": 1 00:21:36.134 } 00:21:36.134 Got JSON-RPC error response 00:21:36.134 response: 00:21:36.134 { 00:21:36.134 "code": -126, 00:21:36.134 "message": "Required key not available" 00:21:36.134 } 00:21:36.134 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3900675 00:21:36.134 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3900675 ']' 00:21:36.134 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3900675 00:21:36.134 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:36.134 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:36.134 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3900675 00:21:36.134 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:36.134 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:36.134 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3900675' 00:21:36.134 killing process with pid 3900675 00:21:36.134 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3900675 00:21:36.134 Received shutdown signal, test time was about 10.000000 seconds 00:21:36.134 00:21:36.134 Latency(us) 00:21:36.134 [2024-11-27T08:53:51.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.134 [2024-11-27T08:53:51.600Z] =================================================================================================================== 00:21:36.134 [2024-11-27T08:53:51.600Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:36.134 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3900675 00:21:36.134 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:36.134 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:36.134 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:36.134 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:36.134 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:36.134 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3898229 00:21:36.134 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3898229 ']' 00:21:36.134 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3898229 00:21:36.134 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:36.134 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:36.134 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3898229 00:21:36.395 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:36.395 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:36.395 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3898229' 00:21:36.395 killing process with pid 3898229 00:21:36.395 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3898229 00:21:36.395 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3898229 00:21:36.395 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:21:36.395 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:36.395 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:36.395 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:36.395 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=3900966 00:21:36.395 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3900966 00:21:36.395 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:36.395 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3900966 ']' 00:21:36.395 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.395 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:36.395 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:36.395 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:36.395 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:36.395 [2024-11-27 09:53:51.808327] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:21:36.395 [2024-11-27 09:53:51.808381] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.656 [2024-11-27 09:53:51.899316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.656 [2024-11-27 09:53:51.932087] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:36.656 [2024-11-27 09:53:51.932121] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:36.656 [2024-11-27 09:53:51.932127] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:36.656 [2024-11-27 09:53:51.932132] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:36.656 [2024-11-27 09:53:51.932136] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
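[editor's note] The chmod 0666 experiment above shows why the earlier setup ran chmod 0600 on the key file: keyring_file_check_path rejects a key file that is group- or world-accessible (it logs "Invalid permissions for key file '/tmp/tmp.qiYKqJAW4a': 0100666"), just as it rejected the empty, non-absolute path earlier in the run, and keyring_file_add_key then returns "Operation not permitted". A minimal sketch of preparing a key file the keyring will accept; the path and key value are the ones from the log, and 0600 is the mode the test itself restores.

    # Sketch: write the interchange PSK to a file with owner-only access,
    # matching the 0600 mode the test uses before the keyring accepts it.
    import os
    import stat

    key = ("NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2Nk"
           "ZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:")
    path = "/tmp/tmp.qiYKqJAW4a"  # path taken from the log

    with open(path, "w") as f:
        f.write(key)  # no trailing newline, matching the test's echo -n
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # 0600; 0666 is rejected
    # keyring_file_add_key (via rpc.py) will now accept this file.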
00:21:36.656 [2024-11-27 09:53:51.932620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:37.227 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:37.227 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:37.227 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:37.227 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:37.227 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:37.227 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:37.227 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.qiYKqJAW4a 00:21:37.227 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:37.227 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.qiYKqJAW4a 00:21:37.227 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:21:37.227 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:37.227 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:21:37.227 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:37.227 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.qiYKqJAW4a 00:21:37.227 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.qiYKqJAW4a 00:21:37.227 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:37.487 [2024-11-27 09:53:52.810972] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:37.487 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:37.748 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:37.748 [2024-11-27 09:53:53.163836] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:37.748 [2024-11-27 09:53:53.164037] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:37.748 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:38.008 malloc0 00:21:38.008 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:38.270 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qiYKqJAW4a 00:21:38.270 [2024-11-27 
09:53:53.690808] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.qiYKqJAW4a': 0100666 00:21:38.270 [2024-11-27 09:53:53.690830] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:38.270 request: 00:21:38.270 { 00:21:38.270 "name": "key0", 00:21:38.270 "path": "/tmp/tmp.qiYKqJAW4a", 00:21:38.270 "method": "keyring_file_add_key", 00:21:38.270 "req_id": 1 00:21:38.270 } 00:21:38.270 Got JSON-RPC error response 00:21:38.270 response: 00:21:38.270 { 00:21:38.270 "code": -1, 00:21:38.270 "message": "Operation not permitted" 00:21:38.270 } 00:21:38.270 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:38.530 [2024-11-27 09:53:53.855238] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:21:38.530 [2024-11-27 09:53:53.855267] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:38.530 request: 00:21:38.530 { 00:21:38.530 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.530 "host": "nqn.2016-06.io.spdk:host1", 00:21:38.530 "psk": "key0", 00:21:38.530 "method": "nvmf_subsystem_add_host", 00:21:38.530 "req_id": 1 00:21:38.530 } 00:21:38.530 Got JSON-RPC error response 00:21:38.530 response: 00:21:38.530 { 00:21:38.530 "code": -32603, 00:21:38.530 "message": "Internal error" 00:21:38.530 } 00:21:38.530 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:38.530 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:38.530 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:38.530 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:38.530 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3900966 00:21:38.530 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3900966 ']' 00:21:38.530 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3900966 00:21:38.530 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:38.530 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:38.530 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3900966 00:21:38.530 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:38.530 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:38.530 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3900966' 00:21:38.530 killing process with pid 3900966 00:21:38.530 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3900966 00:21:38.530 09:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3900966 00:21:38.793 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.qiYKqJAW4a 00:21:38.793 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:21:38.793 09:53:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:38.793 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:38.793 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.793 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3901583 00:21:38.793 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3901583 00:21:38.793 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:38.793 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3901583 ']' 00:21:38.793 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.793 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:38.793 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.793 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:38.793 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.793 [2024-11-27 09:53:54.125684] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:21:38.793 [2024-11-27 09:53:54.125739] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:38.793 [2024-11-27 09:53:54.218372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.793 [2024-11-27 09:53:54.252483] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:38.793 [2024-11-27 09:53:54.252519] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:38.793 [2024-11-27 09:53:54.252525] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:38.793 [2024-11-27 09:53:54.252534] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:38.793 [2024-11-27 09:53:54.252538] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
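The failure just above is the test's deliberate negative case: SPDK's file-based keyring refuses a PSK file whose mode (0100666 here) grants group or world access, so keyring_file_add_key returns "Operation not permitted" and the dependent nvmf_subsystem_add_host then fails with "Key 'key0' does not exist". The script recovers by tightening the file mode before the next setup pass (target/tls.sh@182). A minimal sketch of that fix, reusing this run's key path from the log:

  # owner read/write only; anything looser is rejected by keyring_file_check_path
  chmod 0600 /tmp/tmp.qiYKqJAW4a
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qiYKqJAW4a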
00:21:38.793 [2024-11-27 09:53:54.253033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.736 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:39.736 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:39.736 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:39.736 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:39.736 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:39.736 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:39.736 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.qiYKqJAW4a 00:21:39.736 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.qiYKqJAW4a 00:21:39.736 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:39.736 [2024-11-27 09:53:55.127179] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:39.736 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:39.997 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:40.258 [2024-11-27 09:53:55.492080] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:40.258 [2024-11-27 09:53:55.492285] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:40.258 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:40.258 malloc0 00:21:40.258 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:40.519 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qiYKqJAW4a 00:21:40.780 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:40.780 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:40.780 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3902022 00:21:40.780 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:40.780 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3902022 /var/tmp/bdevperf.sock 00:21:40.780 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 3902022 ']' 00:21:40.780 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:40.780 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:40.780 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:41.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:41.041 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:41.041 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:41.041 [2024-11-27 09:53:56.286262] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:21:41.041 [2024-11-27 09:53:56.286316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3902022 ] 00:21:41.041 [2024-11-27 09:53:56.373666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.041 [2024-11-27 09:53:56.408624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.984 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:41.984 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:41.984 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qiYKqJAW4a 00:21:41.984 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:41.984 [2024-11-27 09:53:57.436186] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:42.244 TLSTESTn1 00:21:42.245 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:42.505 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:21:42.505 "subsystems": [ 00:21:42.505 { 00:21:42.505 "subsystem": "keyring", 00:21:42.505 "config": [ 00:21:42.505 { 00:21:42.505 "method": "keyring_file_add_key", 00:21:42.505 "params": { 00:21:42.505 "name": "key0", 00:21:42.505 "path": "/tmp/tmp.qiYKqJAW4a" 00:21:42.505 } 00:21:42.505 } 00:21:42.505 ] 00:21:42.505 }, 00:21:42.505 { 00:21:42.505 "subsystem": "iobuf", 00:21:42.505 "config": [ 00:21:42.505 { 00:21:42.505 "method": "iobuf_set_options", 00:21:42.505 "params": { 00:21:42.505 "small_pool_count": 8192, 00:21:42.505 "large_pool_count": 1024, 00:21:42.505 "small_bufsize": 8192, 00:21:42.505 "large_bufsize": 135168, 00:21:42.505 "enable_numa": false 00:21:42.505 } 00:21:42.505 } 00:21:42.505 ] 00:21:42.505 }, 00:21:42.505 { 00:21:42.505 "subsystem": "sock", 00:21:42.505 "config": [ 00:21:42.505 { 00:21:42.505 "method": "sock_set_default_impl", 00:21:42.505 "params": { 00:21:42.505 "impl_name": "posix" 
00:21:42.505 } 00:21:42.505 }, 00:21:42.505 { 00:21:42.505 "method": "sock_impl_set_options", 00:21:42.505 "params": { 00:21:42.505 "impl_name": "ssl", 00:21:42.505 "recv_buf_size": 4096, 00:21:42.505 "send_buf_size": 4096, 00:21:42.505 "enable_recv_pipe": true, 00:21:42.505 "enable_quickack": false, 00:21:42.505 "enable_placement_id": 0, 00:21:42.505 "enable_zerocopy_send_server": true, 00:21:42.505 "enable_zerocopy_send_client": false, 00:21:42.505 "zerocopy_threshold": 0, 00:21:42.505 "tls_version": 0, 00:21:42.505 "enable_ktls": false 00:21:42.505 } 00:21:42.505 }, 00:21:42.505 { 00:21:42.505 "method": "sock_impl_set_options", 00:21:42.505 "params": { 00:21:42.505 "impl_name": "posix", 00:21:42.505 "recv_buf_size": 2097152, 00:21:42.505 "send_buf_size": 2097152, 00:21:42.505 "enable_recv_pipe": true, 00:21:42.505 "enable_quickack": false, 00:21:42.505 "enable_placement_id": 0, 00:21:42.505 "enable_zerocopy_send_server": true, 00:21:42.505 "enable_zerocopy_send_client": false, 00:21:42.505 "zerocopy_threshold": 0, 00:21:42.505 "tls_version": 0, 00:21:42.505 "enable_ktls": false 00:21:42.505 } 00:21:42.505 } 00:21:42.505 ] 00:21:42.505 }, 00:21:42.505 { 00:21:42.505 "subsystem": "vmd", 00:21:42.505 "config": [] 00:21:42.505 }, 00:21:42.505 { 00:21:42.505 "subsystem": "accel", 00:21:42.505 "config": [ 00:21:42.505 { 00:21:42.505 "method": "accel_set_options", 00:21:42.505 "params": { 00:21:42.505 "small_cache_size": 128, 00:21:42.505 "large_cache_size": 16, 00:21:42.505 "task_count": 2048, 00:21:42.505 "sequence_count": 2048, 00:21:42.505 "buf_count": 2048 00:21:42.505 } 00:21:42.505 } 00:21:42.505 ] 00:21:42.505 }, 00:21:42.505 { 00:21:42.505 "subsystem": "bdev", 00:21:42.505 "config": [ 00:21:42.505 { 00:21:42.505 "method": "bdev_set_options", 00:21:42.506 "params": { 00:21:42.506 "bdev_io_pool_size": 65535, 00:21:42.506 "bdev_io_cache_size": 256, 00:21:42.506 "bdev_auto_examine": true, 00:21:42.506 "iobuf_small_cache_size": 128, 00:21:42.506 "iobuf_large_cache_size": 16 00:21:42.506 } 00:21:42.506 }, 00:21:42.506 { 00:21:42.506 "method": "bdev_raid_set_options", 00:21:42.506 "params": { 00:21:42.506 "process_window_size_kb": 1024, 00:21:42.506 "process_max_bandwidth_mb_sec": 0 00:21:42.506 } 00:21:42.506 }, 00:21:42.506 { 00:21:42.506 "method": "bdev_iscsi_set_options", 00:21:42.506 "params": { 00:21:42.506 "timeout_sec": 30 00:21:42.506 } 00:21:42.506 }, 00:21:42.506 { 00:21:42.506 "method": "bdev_nvme_set_options", 00:21:42.506 "params": { 00:21:42.506 "action_on_timeout": "none", 00:21:42.506 "timeout_us": 0, 00:21:42.506 "timeout_admin_us": 0, 00:21:42.506 "keep_alive_timeout_ms": 10000, 00:21:42.506 "arbitration_burst": 0, 00:21:42.506 "low_priority_weight": 0, 00:21:42.506 "medium_priority_weight": 0, 00:21:42.506 "high_priority_weight": 0, 00:21:42.506 "nvme_adminq_poll_period_us": 10000, 00:21:42.506 "nvme_ioq_poll_period_us": 0, 00:21:42.506 "io_queue_requests": 0, 00:21:42.506 "delay_cmd_submit": true, 00:21:42.506 "transport_retry_count": 4, 00:21:42.506 "bdev_retry_count": 3, 00:21:42.506 "transport_ack_timeout": 0, 00:21:42.506 "ctrlr_loss_timeout_sec": 0, 00:21:42.506 "reconnect_delay_sec": 0, 00:21:42.506 "fast_io_fail_timeout_sec": 0, 00:21:42.506 "disable_auto_failback": false, 00:21:42.506 "generate_uuids": false, 00:21:42.506 "transport_tos": 0, 00:21:42.506 "nvme_error_stat": false, 00:21:42.506 "rdma_srq_size": 0, 00:21:42.506 "io_path_stat": false, 00:21:42.506 "allow_accel_sequence": false, 00:21:42.506 "rdma_max_cq_size": 0, 00:21:42.506 
"rdma_cm_event_timeout_ms": 0, 00:21:42.506 "dhchap_digests": [ 00:21:42.506 "sha256", 00:21:42.506 "sha384", 00:21:42.506 "sha512" 00:21:42.506 ], 00:21:42.506 "dhchap_dhgroups": [ 00:21:42.506 "null", 00:21:42.506 "ffdhe2048", 00:21:42.506 "ffdhe3072", 00:21:42.506 "ffdhe4096", 00:21:42.506 "ffdhe6144", 00:21:42.506 "ffdhe8192" 00:21:42.506 ] 00:21:42.506 } 00:21:42.506 }, 00:21:42.506 { 00:21:42.506 "method": "bdev_nvme_set_hotplug", 00:21:42.506 "params": { 00:21:42.506 "period_us": 100000, 00:21:42.506 "enable": false 00:21:42.506 } 00:21:42.506 }, 00:21:42.506 { 00:21:42.506 "method": "bdev_malloc_create", 00:21:42.506 "params": { 00:21:42.506 "name": "malloc0", 00:21:42.506 "num_blocks": 8192, 00:21:42.506 "block_size": 4096, 00:21:42.506 "physical_block_size": 4096, 00:21:42.506 "uuid": "051f9789-fb73-4c15-8ea0-2a88531d1ff0", 00:21:42.506 "optimal_io_boundary": 0, 00:21:42.506 "md_size": 0, 00:21:42.506 "dif_type": 0, 00:21:42.506 "dif_is_head_of_md": false, 00:21:42.506 "dif_pi_format": 0 00:21:42.506 } 00:21:42.506 }, 00:21:42.506 { 00:21:42.506 "method": "bdev_wait_for_examine" 00:21:42.506 } 00:21:42.506 ] 00:21:42.506 }, 00:21:42.506 { 00:21:42.506 "subsystem": "nbd", 00:21:42.506 "config": [] 00:21:42.506 }, 00:21:42.506 { 00:21:42.506 "subsystem": "scheduler", 00:21:42.506 "config": [ 00:21:42.506 { 00:21:42.506 "method": "framework_set_scheduler", 00:21:42.506 "params": { 00:21:42.506 "name": "static" 00:21:42.506 } 00:21:42.506 } 00:21:42.506 ] 00:21:42.506 }, 00:21:42.506 { 00:21:42.506 "subsystem": "nvmf", 00:21:42.506 "config": [ 00:21:42.506 { 00:21:42.506 "method": "nvmf_set_config", 00:21:42.506 "params": { 00:21:42.506 "discovery_filter": "match_any", 00:21:42.506 "admin_cmd_passthru": { 00:21:42.506 "identify_ctrlr": false 00:21:42.506 }, 00:21:42.506 "dhchap_digests": [ 00:21:42.506 "sha256", 00:21:42.506 "sha384", 00:21:42.506 "sha512" 00:21:42.506 ], 00:21:42.506 "dhchap_dhgroups": [ 00:21:42.506 "null", 00:21:42.506 "ffdhe2048", 00:21:42.506 "ffdhe3072", 00:21:42.506 "ffdhe4096", 00:21:42.506 "ffdhe6144", 00:21:42.506 "ffdhe8192" 00:21:42.506 ] 00:21:42.506 } 00:21:42.506 }, 00:21:42.506 { 00:21:42.506 "method": "nvmf_set_max_subsystems", 00:21:42.506 "params": { 00:21:42.506 "max_subsystems": 1024 00:21:42.506 } 00:21:42.506 }, 00:21:42.506 { 00:21:42.506 "method": "nvmf_set_crdt", 00:21:42.506 "params": { 00:21:42.506 "crdt1": 0, 00:21:42.506 "crdt2": 0, 00:21:42.506 "crdt3": 0 00:21:42.506 } 00:21:42.506 }, 00:21:42.506 { 00:21:42.506 "method": "nvmf_create_transport", 00:21:42.506 "params": { 00:21:42.506 "trtype": "TCP", 00:21:42.506 "max_queue_depth": 128, 00:21:42.506 "max_io_qpairs_per_ctrlr": 127, 00:21:42.506 "in_capsule_data_size": 4096, 00:21:42.506 "max_io_size": 131072, 00:21:42.506 "io_unit_size": 131072, 00:21:42.506 "max_aq_depth": 128, 00:21:42.506 "num_shared_buffers": 511, 00:21:42.506 "buf_cache_size": 4294967295, 00:21:42.506 "dif_insert_or_strip": false, 00:21:42.506 "zcopy": false, 00:21:42.506 "c2h_success": false, 00:21:42.506 "sock_priority": 0, 00:21:42.506 "abort_timeout_sec": 1, 00:21:42.506 "ack_timeout": 0, 00:21:42.506 "data_wr_pool_size": 0 00:21:42.506 } 00:21:42.506 }, 00:21:42.506 { 00:21:42.506 "method": "nvmf_create_subsystem", 00:21:42.506 "params": { 00:21:42.506 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.506 "allow_any_host": false, 00:21:42.506 "serial_number": "SPDK00000000000001", 00:21:42.506 "model_number": "SPDK bdev Controller", 00:21:42.506 "max_namespaces": 10, 00:21:42.506 "min_cntlid": 1, 00:21:42.506 
"max_cntlid": 65519, 00:21:42.506 "ana_reporting": false 00:21:42.506 } 00:21:42.506 }, 00:21:42.506 { 00:21:42.506 "method": "nvmf_subsystem_add_host", 00:21:42.506 "params": { 00:21:42.506 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.506 "host": "nqn.2016-06.io.spdk:host1", 00:21:42.506 "psk": "key0" 00:21:42.506 } 00:21:42.506 }, 00:21:42.506 { 00:21:42.506 "method": "nvmf_subsystem_add_ns", 00:21:42.506 "params": { 00:21:42.506 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.506 "namespace": { 00:21:42.506 "nsid": 1, 00:21:42.506 "bdev_name": "malloc0", 00:21:42.506 "nguid": "051F9789FB734C158EA02A88531D1FF0", 00:21:42.506 "uuid": "051f9789-fb73-4c15-8ea0-2a88531d1ff0", 00:21:42.506 "no_auto_visible": false 00:21:42.506 } 00:21:42.506 } 00:21:42.506 }, 00:21:42.506 { 00:21:42.506 "method": "nvmf_subsystem_add_listener", 00:21:42.506 "params": { 00:21:42.506 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.506 "listen_address": { 00:21:42.506 "trtype": "TCP", 00:21:42.506 "adrfam": "IPv4", 00:21:42.506 "traddr": "10.0.0.2", 00:21:42.506 "trsvcid": "4420" 00:21:42.506 }, 00:21:42.506 "secure_channel": true 00:21:42.506 } 00:21:42.506 } 00:21:42.506 ] 00:21:42.506 } 00:21:42.506 ] 00:21:42.506 }' 00:21:42.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:42.768 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:21:42.768 "subsystems": [ 00:21:42.768 { 00:21:42.768 "subsystem": "keyring", 00:21:42.768 "config": [ 00:21:42.768 { 00:21:42.768 "method": "keyring_file_add_key", 00:21:42.768 "params": { 00:21:42.768 "name": "key0", 00:21:42.768 "path": "/tmp/tmp.qiYKqJAW4a" 00:21:42.768 } 00:21:42.768 } 00:21:42.768 ] 00:21:42.768 }, 00:21:42.768 { 00:21:42.768 "subsystem": "iobuf", 00:21:42.768 "config": [ 00:21:42.768 { 00:21:42.768 "method": "iobuf_set_options", 00:21:42.768 "params": { 00:21:42.768 "small_pool_count": 8192, 00:21:42.768 "large_pool_count": 1024, 00:21:42.768 "small_bufsize": 8192, 00:21:42.768 "large_bufsize": 135168, 00:21:42.768 "enable_numa": false 00:21:42.768 } 00:21:42.768 } 00:21:42.768 ] 00:21:42.768 }, 00:21:42.768 { 00:21:42.768 "subsystem": "sock", 00:21:42.768 "config": [ 00:21:42.768 { 00:21:42.768 "method": "sock_set_default_impl", 00:21:42.768 "params": { 00:21:42.768 "impl_name": "posix" 00:21:42.768 } 00:21:42.768 }, 00:21:42.768 { 00:21:42.768 "method": "sock_impl_set_options", 00:21:42.768 "params": { 00:21:42.768 "impl_name": "ssl", 00:21:42.768 "recv_buf_size": 4096, 00:21:42.768 "send_buf_size": 4096, 00:21:42.768 "enable_recv_pipe": true, 00:21:42.768 "enable_quickack": false, 00:21:42.768 "enable_placement_id": 0, 00:21:42.768 "enable_zerocopy_send_server": true, 00:21:42.768 "enable_zerocopy_send_client": false, 00:21:42.768 "zerocopy_threshold": 0, 00:21:42.768 "tls_version": 0, 00:21:42.768 "enable_ktls": false 00:21:42.768 } 00:21:42.768 }, 00:21:42.768 { 00:21:42.768 "method": "sock_impl_set_options", 00:21:42.768 "params": { 00:21:42.768 "impl_name": "posix", 00:21:42.768 "recv_buf_size": 2097152, 00:21:42.768 "send_buf_size": 2097152, 00:21:42.768 "enable_recv_pipe": true, 00:21:42.768 "enable_quickack": false, 00:21:42.768 "enable_placement_id": 0, 00:21:42.768 "enable_zerocopy_send_server": true, 00:21:42.768 "enable_zerocopy_send_client": false, 00:21:42.768 "zerocopy_threshold": 0, 00:21:42.769 "tls_version": 0, 00:21:42.769 "enable_ktls": false 00:21:42.769 } 00:21:42.769 
} 00:21:42.769 ] 00:21:42.769 }, 00:21:42.769 { 00:21:42.769 "subsystem": "vmd", 00:21:42.769 "config": [] 00:21:42.769 }, 00:21:42.769 { 00:21:42.769 "subsystem": "accel", 00:21:42.769 "config": [ 00:21:42.769 { 00:21:42.769 "method": "accel_set_options", 00:21:42.769 "params": { 00:21:42.769 "small_cache_size": 128, 00:21:42.769 "large_cache_size": 16, 00:21:42.769 "task_count": 2048, 00:21:42.769 "sequence_count": 2048, 00:21:42.769 "buf_count": 2048 00:21:42.769 } 00:21:42.769 } 00:21:42.769 ] 00:21:42.769 }, 00:21:42.769 { 00:21:42.769 "subsystem": "bdev", 00:21:42.769 "config": [ 00:21:42.769 { 00:21:42.769 "method": "bdev_set_options", 00:21:42.769 "params": { 00:21:42.769 "bdev_io_pool_size": 65535, 00:21:42.769 "bdev_io_cache_size": 256, 00:21:42.769 "bdev_auto_examine": true, 00:21:42.769 "iobuf_small_cache_size": 128, 00:21:42.769 "iobuf_large_cache_size": 16 00:21:42.769 } 00:21:42.769 }, 00:21:42.769 { 00:21:42.769 "method": "bdev_raid_set_options", 00:21:42.769 "params": { 00:21:42.769 "process_window_size_kb": 1024, 00:21:42.769 "process_max_bandwidth_mb_sec": 0 00:21:42.769 } 00:21:42.769 }, 00:21:42.769 { 00:21:42.769 "method": "bdev_iscsi_set_options", 00:21:42.769 "params": { 00:21:42.769 "timeout_sec": 30 00:21:42.769 } 00:21:42.769 }, 00:21:42.769 { 00:21:42.769 "method": "bdev_nvme_set_options", 00:21:42.769 "params": { 00:21:42.769 "action_on_timeout": "none", 00:21:42.769 "timeout_us": 0, 00:21:42.769 "timeout_admin_us": 0, 00:21:42.769 "keep_alive_timeout_ms": 10000, 00:21:42.769 "arbitration_burst": 0, 00:21:42.769 "low_priority_weight": 0, 00:21:42.769 "medium_priority_weight": 0, 00:21:42.769 "high_priority_weight": 0, 00:21:42.769 "nvme_adminq_poll_period_us": 10000, 00:21:42.769 "nvme_ioq_poll_period_us": 0, 00:21:42.769 "io_queue_requests": 512, 00:21:42.769 "delay_cmd_submit": true, 00:21:42.769 "transport_retry_count": 4, 00:21:42.769 "bdev_retry_count": 3, 00:21:42.769 "transport_ack_timeout": 0, 00:21:42.769 "ctrlr_loss_timeout_sec": 0, 00:21:42.769 "reconnect_delay_sec": 0, 00:21:42.769 "fast_io_fail_timeout_sec": 0, 00:21:42.769 "disable_auto_failback": false, 00:21:42.769 "generate_uuids": false, 00:21:42.769 "transport_tos": 0, 00:21:42.769 "nvme_error_stat": false, 00:21:42.769 "rdma_srq_size": 0, 00:21:42.769 "io_path_stat": false, 00:21:42.769 "allow_accel_sequence": false, 00:21:42.769 "rdma_max_cq_size": 0, 00:21:42.769 "rdma_cm_event_timeout_ms": 0, 00:21:42.769 "dhchap_digests": [ 00:21:42.769 "sha256", 00:21:42.769 "sha384", 00:21:42.769 "sha512" 00:21:42.769 ], 00:21:42.769 "dhchap_dhgroups": [ 00:21:42.769 "null", 00:21:42.769 "ffdhe2048", 00:21:42.769 "ffdhe3072", 00:21:42.769 "ffdhe4096", 00:21:42.769 "ffdhe6144", 00:21:42.769 "ffdhe8192" 00:21:42.769 ] 00:21:42.769 } 00:21:42.769 }, 00:21:42.769 { 00:21:42.769 "method": "bdev_nvme_attach_controller", 00:21:42.769 "params": { 00:21:42.769 "name": "TLSTEST", 00:21:42.769 "trtype": "TCP", 00:21:42.769 "adrfam": "IPv4", 00:21:42.769 "traddr": "10.0.0.2", 00:21:42.769 "trsvcid": "4420", 00:21:42.769 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.769 "prchk_reftag": false, 00:21:42.769 "prchk_guard": false, 00:21:42.769 "ctrlr_loss_timeout_sec": 0, 00:21:42.769 "reconnect_delay_sec": 0, 00:21:42.769 "fast_io_fail_timeout_sec": 0, 00:21:42.769 "psk": "key0", 00:21:42.769 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:42.769 "hdgst": false, 00:21:42.769 "ddgst": false, 00:21:42.769 "multipath": "multipath" 00:21:42.769 } 00:21:42.769 }, 00:21:42.769 { 00:21:42.769 "method": 
"bdev_nvme_set_hotplug", 00:21:42.769 "params": { 00:21:42.769 "period_us": 100000, 00:21:42.769 "enable": false 00:21:42.769 } 00:21:42.769 }, 00:21:42.769 { 00:21:42.769 "method": "bdev_wait_for_examine" 00:21:42.769 } 00:21:42.769 ] 00:21:42.769 }, 00:21:42.769 { 00:21:42.769 "subsystem": "nbd", 00:21:42.769 "config": [] 00:21:42.769 } 00:21:42.769 ] 00:21:42.769 }' 00:21:42.769 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3902022 00:21:42.769 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3902022 ']' 00:21:42.769 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3902022 00:21:42.769 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:42.769 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:42.769 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3902022 00:21:42.769 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:42.769 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:42.769 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3902022' 00:21:42.769 killing process with pid 3902022 00:21:42.769 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3902022 00:21:42.769 Received shutdown signal, test time was about 10.000000 seconds 00:21:42.769 00:21:42.769 Latency(us) 00:21:42.769 [2024-11-27T08:53:58.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.769 [2024-11-27T08:53:58.235Z] =================================================================================================================== 00:21:42.769 [2024-11-27T08:53:58.235Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:42.769 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3902022 00:21:42.769 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3901583 00:21:42.769 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3901583 ']' 00:21:42.769 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3901583 00:21:42.769 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:42.769 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:42.769 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3901583 00:21:43.029 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:43.029 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:43.029 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3901583' 00:21:43.029 killing process with pid 3901583 00:21:43.029 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3901583 00:21:43.029 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3901583 00:21:43.029 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:43.029 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:43.029 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:43.029 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:43.029 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:21:43.029 "subsystems": [ 00:21:43.029 { 00:21:43.029 "subsystem": "keyring", 00:21:43.029 "config": [ 00:21:43.029 { 00:21:43.029 "method": "keyring_file_add_key", 00:21:43.029 "params": { 00:21:43.029 "name": "key0", 00:21:43.029 "path": "/tmp/tmp.qiYKqJAW4a" 00:21:43.029 } 00:21:43.029 } 00:21:43.029 ] 00:21:43.029 }, 00:21:43.029 { 00:21:43.029 "subsystem": "iobuf", 00:21:43.029 "config": [ 00:21:43.029 { 00:21:43.029 "method": "iobuf_set_options", 00:21:43.029 "params": { 00:21:43.029 "small_pool_count": 8192, 00:21:43.029 "large_pool_count": 1024, 00:21:43.029 "small_bufsize": 8192, 00:21:43.029 "large_bufsize": 135168, 00:21:43.029 "enable_numa": false 00:21:43.029 } 00:21:43.029 } 00:21:43.029 ] 00:21:43.029 }, 00:21:43.029 { 00:21:43.029 "subsystem": "sock", 00:21:43.029 "config": [ 00:21:43.029 { 00:21:43.029 "method": "sock_set_default_impl", 00:21:43.029 "params": { 00:21:43.029 "impl_name": "posix" 00:21:43.029 } 00:21:43.030 }, 00:21:43.030 { 00:21:43.030 "method": "sock_impl_set_options", 00:21:43.030 "params": { 00:21:43.030 "impl_name": "ssl", 00:21:43.030 "recv_buf_size": 4096, 00:21:43.030 "send_buf_size": 4096, 00:21:43.030 "enable_recv_pipe": true, 00:21:43.030 "enable_quickack": false, 00:21:43.030 "enable_placement_id": 0, 00:21:43.030 "enable_zerocopy_send_server": true, 00:21:43.030 "enable_zerocopy_send_client": false, 00:21:43.030 "zerocopy_threshold": 0, 00:21:43.030 "tls_version": 0, 00:21:43.030 "enable_ktls": false 00:21:43.030 } 00:21:43.030 }, 00:21:43.030 { 00:21:43.030 "method": "sock_impl_set_options", 00:21:43.030 "params": { 00:21:43.030 "impl_name": "posix", 00:21:43.030 "recv_buf_size": 2097152, 00:21:43.030 "send_buf_size": 2097152, 00:21:43.030 "enable_recv_pipe": true, 00:21:43.030 "enable_quickack": false, 00:21:43.030 "enable_placement_id": 0, 00:21:43.030 "enable_zerocopy_send_server": true, 00:21:43.030 "enable_zerocopy_send_client": false, 00:21:43.030 "zerocopy_threshold": 0, 00:21:43.030 "tls_version": 0, 00:21:43.030 "enable_ktls": false 00:21:43.030 } 00:21:43.030 } 00:21:43.030 ] 00:21:43.030 }, 00:21:43.030 { 00:21:43.030 "subsystem": "vmd", 00:21:43.030 "config": [] 00:21:43.030 }, 00:21:43.030 { 00:21:43.030 "subsystem": "accel", 00:21:43.030 "config": [ 00:21:43.030 { 00:21:43.030 "method": "accel_set_options", 00:21:43.030 "params": { 00:21:43.030 "small_cache_size": 128, 00:21:43.030 "large_cache_size": 16, 00:21:43.030 "task_count": 2048, 00:21:43.030 "sequence_count": 2048, 00:21:43.030 "buf_count": 2048 00:21:43.030 } 00:21:43.030 } 00:21:43.030 ] 00:21:43.030 }, 00:21:43.030 { 00:21:43.030 "subsystem": "bdev", 00:21:43.030 "config": [ 00:21:43.030 { 00:21:43.030 "method": "bdev_set_options", 00:21:43.030 "params": { 00:21:43.030 "bdev_io_pool_size": 65535, 00:21:43.030 "bdev_io_cache_size": 256, 00:21:43.030 "bdev_auto_examine": true, 00:21:43.030 "iobuf_small_cache_size": 128, 00:21:43.030 "iobuf_large_cache_size": 16 00:21:43.030 } 00:21:43.030 }, 00:21:43.030 { 00:21:43.030 "method": "bdev_raid_set_options", 00:21:43.030 "params": { 00:21:43.030 
"process_window_size_kb": 1024, 00:21:43.030 "process_max_bandwidth_mb_sec": 0 00:21:43.030 } 00:21:43.030 }, 00:21:43.030 { 00:21:43.030 "method": "bdev_iscsi_set_options", 00:21:43.030 "params": { 00:21:43.030 "timeout_sec": 30 00:21:43.030 } 00:21:43.030 }, 00:21:43.030 { 00:21:43.030 "method": "bdev_nvme_set_options", 00:21:43.030 "params": { 00:21:43.030 "action_on_timeout": "none", 00:21:43.030 "timeout_us": 0, 00:21:43.030 "timeout_admin_us": 0, 00:21:43.030 "keep_alive_timeout_ms": 10000, 00:21:43.030 "arbitration_burst": 0, 00:21:43.030 "low_priority_weight": 0, 00:21:43.030 "medium_priority_weight": 0, 00:21:43.030 "high_priority_weight": 0, 00:21:43.030 "nvme_adminq_poll_period_us": 10000, 00:21:43.030 "nvme_ioq_poll_period_us": 0, 00:21:43.030 "io_queue_requests": 0, 00:21:43.030 "delay_cmd_submit": true, 00:21:43.030 "transport_retry_count": 4, 00:21:43.030 "bdev_retry_count": 3, 00:21:43.030 "transport_ack_timeout": 0, 00:21:43.030 "ctrlr_loss_timeout_sec": 0, 00:21:43.030 "reconnect_delay_sec": 0, 00:21:43.030 "fast_io_fail_timeout_sec": 0, 00:21:43.030 "disable_auto_failback": false, 00:21:43.030 "generate_uuids": false, 00:21:43.030 "transport_tos": 0, 00:21:43.030 "nvme_error_stat": false, 00:21:43.030 "rdma_srq_size": 0, 00:21:43.030 "io_path_stat": false, 00:21:43.030 "allow_accel_sequence": false, 00:21:43.030 "rdma_max_cq_size": 0, 00:21:43.030 "rdma_cm_event_timeout_ms": 0, 00:21:43.030 "dhchap_digests": [ 00:21:43.030 "sha256", 00:21:43.030 "sha384", 00:21:43.030 "sha512" 00:21:43.030 ], 00:21:43.030 "dhchap_dhgroups": [ 00:21:43.030 "null", 00:21:43.030 "ffdhe2048", 00:21:43.030 "ffdhe3072", 00:21:43.030 "ffdhe4096", 00:21:43.030 "ffdhe6144", 00:21:43.030 "ffdhe8192" 00:21:43.030 ] 00:21:43.030 } 00:21:43.030 }, 00:21:43.030 { 00:21:43.030 "method": "bdev_nvme_set_hotplug", 00:21:43.030 "params": { 00:21:43.030 "period_us": 100000, 00:21:43.030 "enable": false 00:21:43.030 } 00:21:43.030 }, 00:21:43.030 { 00:21:43.030 "method": "bdev_malloc_create", 00:21:43.030 "params": { 00:21:43.030 "name": "malloc0", 00:21:43.030 "num_blocks": 8192, 00:21:43.030 "block_size": 4096, 00:21:43.030 "physical_block_size": 4096, 00:21:43.030 "uuid": "051f9789-fb73-4c15-8ea0-2a88531d1ff0", 00:21:43.030 "optimal_io_boundary": 0, 00:21:43.030 "md_size": 0, 00:21:43.030 "dif_type": 0, 00:21:43.030 "dif_is_head_of_md": false, 00:21:43.030 "dif_pi_format": 0 00:21:43.030 } 00:21:43.030 }, 00:21:43.030 { 00:21:43.030 "method": "bdev_wait_for_examine" 00:21:43.030 } 00:21:43.030 ] 00:21:43.030 }, 00:21:43.030 { 00:21:43.030 "subsystem": "nbd", 00:21:43.030 "config": [] 00:21:43.030 }, 00:21:43.030 { 00:21:43.030 "subsystem": "scheduler", 00:21:43.030 "config": [ 00:21:43.030 { 00:21:43.030 "method": "framework_set_scheduler", 00:21:43.030 "params": { 00:21:43.030 "name": "static" 00:21:43.030 } 00:21:43.030 } 00:21:43.030 ] 00:21:43.030 }, 00:21:43.030 { 00:21:43.030 "subsystem": "nvmf", 00:21:43.030 "config": [ 00:21:43.030 { 00:21:43.030 "method": "nvmf_set_config", 00:21:43.030 "params": { 00:21:43.030 "discovery_filter": "match_any", 00:21:43.030 "admin_cmd_passthru": { 00:21:43.030 "identify_ctrlr": false 00:21:43.030 }, 00:21:43.030 "dhchap_digests": [ 00:21:43.030 "sha256", 00:21:43.030 "sha384", 00:21:43.030 "sha512" 00:21:43.030 ], 00:21:43.030 "dhchap_dhgroups": [ 00:21:43.030 "null", 00:21:43.030 "ffdhe2048", 00:21:43.030 "ffdhe3072", 00:21:43.030 "ffdhe4096", 00:21:43.030 "ffdhe6144", 00:21:43.030 "ffdhe8192" 00:21:43.030 ] 00:21:43.030 } 00:21:43.030 }, 00:21:43.030 { 
00:21:43.030 "method": "nvmf_set_max_subsystems", 00:21:43.030 "params": { 00:21:43.030 "max_subsystems": 1024 00:21:43.030 } 00:21:43.030 }, 00:21:43.030 { 00:21:43.030 "method": "nvmf_set_crdt", 00:21:43.030 "params": { 00:21:43.030 "crdt1": 0, 00:21:43.030 "crdt2": 0, 00:21:43.030 "crdt3": 0 00:21:43.030 } 00:21:43.030 }, 00:21:43.030 { 00:21:43.030 "method": "nvmf_create_transport", 00:21:43.030 "params": { 00:21:43.030 "trtype": "TCP", 00:21:43.030 "max_queue_depth": 128, 00:21:43.030 "max_io_qpairs_per_ctrlr": 127, 00:21:43.030 "in_capsule_data_size": 4096, 00:21:43.030 "max_io_size": 131072, 00:21:43.030 "io_unit_size": 131072, 00:21:43.030 "max_aq_depth": 128, 00:21:43.030 "num_shared_buffers": 511, 00:21:43.030 "buf_cache_size": 4294967295, 00:21:43.030 "dif_insert_or_strip": false, 00:21:43.030 "zcopy": false, 00:21:43.030 "c2h_success": false, 00:21:43.030 "sock_priority": 0, 00:21:43.030 "abort_timeout_sec": 1, 00:21:43.030 "ack_timeout": 0, 00:21:43.030 "data_wr_pool_size": 0 00:21:43.030 } 00:21:43.030 }, 00:21:43.030 { 00:21:43.030 "method": "nvmf_create_subsystem", 00:21:43.030 "params": { 00:21:43.030 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.030 "allow_any_host": false, 00:21:43.030 "serial_number": "SPDK00000000000001", 00:21:43.030 "model_number": "SPDK bdev Controller", 00:21:43.030 "max_namespaces": 10, 00:21:43.030 "min_cntlid": 1, 00:21:43.030 "max_cntlid": 65519, 00:21:43.030 "ana_reporting": false 00:21:43.030 } 00:21:43.030 }, 00:21:43.030 { 00:21:43.030 "method": "nvmf_subsystem_add_host", 00:21:43.030 "params": { 00:21:43.030 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.031 "host": "nqn.2016-06.io.spdk:host1", 00:21:43.031 "psk": "key0" 00:21:43.031 } 00:21:43.031 }, 00:21:43.031 { 00:21:43.031 "method": "nvmf_subsystem_add_ns", 00:21:43.031 "params": { 00:21:43.031 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.031 "namespace": { 00:21:43.031 "nsid": 1, 00:21:43.031 "bdev_name": "malloc0", 00:21:43.031 "nguid": "051F9789FB734C158EA02A88531D1FF0", 00:21:43.031 "uuid": "051f9789-fb73-4c15-8ea0-2a88531d1ff0", 00:21:43.031 "no_auto_visible": false 00:21:43.031 } 00:21:43.031 } 00:21:43.031 }, 00:21:43.031 { 00:21:43.031 "method": "nvmf_subsystem_add_listener", 00:21:43.031 "params": { 00:21:43.031 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.031 "listen_address": { 00:21:43.031 "trtype": "TCP", 00:21:43.031 "adrfam": "IPv4", 00:21:43.031 "traddr": "10.0.0.2", 00:21:43.031 "trsvcid": "4420" 00:21:43.031 }, 00:21:43.031 "secure_channel": true 00:21:43.031 } 00:21:43.031 } 00:21:43.031 ] 00:21:43.031 } 00:21:43.031 ] 00:21:43.031 }' 00:21:43.031 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3902382 00:21:43.031 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3902382 00:21:43.031 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:43.031 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3902382 ']' 00:21:43.031 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.031 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:43.031 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:21:43.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:43.031 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:43.031 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:43.031 [2024-11-27 09:53:58.463289] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:21:43.031 [2024-11-27 09:53:58.463352] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:43.291 [2024-11-27 09:53:58.552075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.291 [2024-11-27 09:53:58.581715] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:43.291 [2024-11-27 09:53:58.581745] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:43.291 [2024-11-27 09:53:58.581751] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:43.291 [2024-11-27 09:53:58.581755] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:43.291 [2024-11-27 09:53:58.581759] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:43.291 [2024-11-27 09:53:58.582241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:43.552 [2024-11-27 09:53:58.774570] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:43.552 [2024-11-27 09:53:58.806596] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:43.552 [2024-11-27 09:53:58.806799] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:43.812 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:43.812 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:43.812 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:43.812 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:43.812 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.072 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:44.072 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3902667 00:21:44.072 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3902667 /var/tmp/bdevperf.sock 00:21:44.072 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3902667 ']' 00:21:44.072 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:44.072 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:44.072 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
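Note how both relaunched processes read their configuration from a /dev/fd path rather than a file on disk: the script captured each save_config dump into a shell variable (tgtconf at tls.sh@198, bdevperfconf at tls.sh@199) and feeds it back in, most likely via bash process substitution, which is what makes the echoed JSON show up as /dev/fd/62 and /dev/fd/63. A hedged sketch of that pattern, with the variable names taken from the xtrace:

  nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")            # appears as /dev/fd/62
  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 \
           -w verify -t 10 -c <(echo "$bdevperfconf")             # appears as /dev/fd/63

Round-tripping through save_config this way means the second run exercises config load of exactly the TLS keyring, secure-channel listener, and PSK host entries the first run built by hand.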
00:21:44.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:44.072 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:44.072 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:44.072 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.072 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:21:44.072 "subsystems": [ 00:21:44.072 { 00:21:44.072 "subsystem": "keyring", 00:21:44.072 "config": [ 00:21:44.072 { 00:21:44.072 "method": "keyring_file_add_key", 00:21:44.072 "params": { 00:21:44.072 "name": "key0", 00:21:44.072 "path": "/tmp/tmp.qiYKqJAW4a" 00:21:44.072 } 00:21:44.072 } 00:21:44.072 ] 00:21:44.072 }, 00:21:44.072 { 00:21:44.072 "subsystem": "iobuf", 00:21:44.072 "config": [ 00:21:44.072 { 00:21:44.072 "method": "iobuf_set_options", 00:21:44.072 "params": { 00:21:44.072 "small_pool_count": 8192, 00:21:44.072 "large_pool_count": 1024, 00:21:44.072 "small_bufsize": 8192, 00:21:44.072 "large_bufsize": 135168, 00:21:44.072 "enable_numa": false 00:21:44.072 } 00:21:44.072 } 00:21:44.072 ] 00:21:44.072 }, 00:21:44.072 { 00:21:44.072 "subsystem": "sock", 00:21:44.072 "config": [ 00:21:44.072 { 00:21:44.072 "method": "sock_set_default_impl", 00:21:44.072 "params": { 00:21:44.072 "impl_name": "posix" 00:21:44.072 } 00:21:44.072 }, 00:21:44.072 { 00:21:44.072 "method": "sock_impl_set_options", 00:21:44.072 "params": { 00:21:44.072 "impl_name": "ssl", 00:21:44.072 "recv_buf_size": 4096, 00:21:44.072 "send_buf_size": 4096, 00:21:44.072 "enable_recv_pipe": true, 00:21:44.072 "enable_quickack": false, 00:21:44.072 "enable_placement_id": 0, 00:21:44.072 "enable_zerocopy_send_server": true, 00:21:44.072 "enable_zerocopy_send_client": false, 00:21:44.072 "zerocopy_threshold": 0, 00:21:44.072 "tls_version": 0, 00:21:44.072 "enable_ktls": false 00:21:44.072 } 00:21:44.072 }, 00:21:44.072 { 00:21:44.072 "method": "sock_impl_set_options", 00:21:44.072 "params": { 00:21:44.072 "impl_name": "posix", 00:21:44.072 "recv_buf_size": 2097152, 00:21:44.072 "send_buf_size": 2097152, 00:21:44.072 "enable_recv_pipe": true, 00:21:44.072 "enable_quickack": false, 00:21:44.072 "enable_placement_id": 0, 00:21:44.072 "enable_zerocopy_send_server": true, 00:21:44.072 "enable_zerocopy_send_client": false, 00:21:44.072 "zerocopy_threshold": 0, 00:21:44.072 "tls_version": 0, 00:21:44.072 "enable_ktls": false 00:21:44.072 } 00:21:44.072 } 00:21:44.072 ] 00:21:44.072 }, 00:21:44.072 { 00:21:44.072 "subsystem": "vmd", 00:21:44.072 "config": [] 00:21:44.072 }, 00:21:44.072 { 00:21:44.072 "subsystem": "accel", 00:21:44.072 "config": [ 00:21:44.072 { 00:21:44.072 "method": "accel_set_options", 00:21:44.072 "params": { 00:21:44.072 "small_cache_size": 128, 00:21:44.072 "large_cache_size": 16, 00:21:44.072 "task_count": 2048, 00:21:44.072 "sequence_count": 2048, 00:21:44.072 "buf_count": 2048 00:21:44.072 } 00:21:44.072 } 00:21:44.072 ] 00:21:44.072 }, 00:21:44.072 { 00:21:44.072 "subsystem": "bdev", 00:21:44.072 "config": [ 00:21:44.072 { 00:21:44.072 "method": "bdev_set_options", 00:21:44.072 "params": { 00:21:44.072 "bdev_io_pool_size": 65535, 00:21:44.072 "bdev_io_cache_size": 256, 00:21:44.072 "bdev_auto_examine": true, 00:21:44.072 "iobuf_small_cache_size": 128, 
00:21:44.072 "iobuf_large_cache_size": 16 00:21:44.072 } 00:21:44.072 }, 00:21:44.072 { 00:21:44.072 "method": "bdev_raid_set_options", 00:21:44.072 "params": { 00:21:44.072 "process_window_size_kb": 1024, 00:21:44.072 "process_max_bandwidth_mb_sec": 0 00:21:44.072 } 00:21:44.072 }, 00:21:44.072 { 00:21:44.072 "method": "bdev_iscsi_set_options", 00:21:44.072 "params": { 00:21:44.072 "timeout_sec": 30 00:21:44.072 } 00:21:44.072 }, 00:21:44.072 { 00:21:44.072 "method": "bdev_nvme_set_options", 00:21:44.072 "params": { 00:21:44.072 "action_on_timeout": "none", 00:21:44.072 "timeout_us": 0, 00:21:44.072 "timeout_admin_us": 0, 00:21:44.072 "keep_alive_timeout_ms": 10000, 00:21:44.072 "arbitration_burst": 0, 00:21:44.072 "low_priority_weight": 0, 00:21:44.072 "medium_priority_weight": 0, 00:21:44.072 "high_priority_weight": 0, 00:21:44.072 "nvme_adminq_poll_period_us": 10000, 00:21:44.072 "nvme_ioq_poll_period_us": 0, 00:21:44.072 "io_queue_requests": 512, 00:21:44.072 "delay_cmd_submit": true, 00:21:44.072 "transport_retry_count": 4, 00:21:44.072 "bdev_retry_count": 3, 00:21:44.072 "transport_ack_timeout": 0, 00:21:44.072 "ctrlr_loss_timeout_sec": 0, 00:21:44.072 "reconnect_delay_sec": 0, 00:21:44.072 "fast_io_fail_timeout_sec": 0, 00:21:44.072 "disable_auto_failback": false, 00:21:44.072 "generate_uuids": false, 00:21:44.072 "transport_tos": 0, 00:21:44.072 "nvme_error_stat": false, 00:21:44.072 "rdma_srq_size": 0, 00:21:44.072 "io_path_stat": false, 00:21:44.072 "allow_accel_sequence": false, 00:21:44.072 "rdma_max_cq_size": 0, 00:21:44.072 "rdma_cm_event_timeout_ms": 0, 00:21:44.072 "dhchap_digests": [ 00:21:44.072 "sha256", 00:21:44.072 "sha384", 00:21:44.072 "sha512" 00:21:44.072 ], 00:21:44.072 "dhchap_dhgroups": [ 00:21:44.072 "null", 00:21:44.072 "ffdhe2048", 00:21:44.072 "ffdhe3072", 00:21:44.072 "ffdhe4096", 00:21:44.072 "ffdhe6144", 00:21:44.072 "ffdhe8192" 00:21:44.072 ] 00:21:44.072 } 00:21:44.072 }, 00:21:44.072 { 00:21:44.072 "method": "bdev_nvme_attach_controller", 00:21:44.072 "params": { 00:21:44.072 "name": "TLSTEST", 00:21:44.072 "trtype": "TCP", 00:21:44.072 "adrfam": "IPv4", 00:21:44.072 "traddr": "10.0.0.2", 00:21:44.072 "trsvcid": "4420", 00:21:44.072 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.072 "prchk_reftag": false, 00:21:44.072 "prchk_guard": false, 00:21:44.072 "ctrlr_loss_timeout_sec": 0, 00:21:44.072 "reconnect_delay_sec": 0, 00:21:44.072 "fast_io_fail_timeout_sec": 0, 00:21:44.072 "psk": "key0", 00:21:44.072 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:44.072 "hdgst": false, 00:21:44.072 "ddgst": false, 00:21:44.072 "multipath": "multipath" 00:21:44.072 } 00:21:44.072 }, 00:21:44.072 { 00:21:44.073 "method": "bdev_nvme_set_hotplug", 00:21:44.073 "params": { 00:21:44.073 "period_us": 100000, 00:21:44.073 "enable": false 00:21:44.073 } 00:21:44.073 }, 00:21:44.073 { 00:21:44.073 "method": "bdev_wait_for_examine" 00:21:44.073 } 00:21:44.073 ] 00:21:44.073 }, 00:21:44.073 { 00:21:44.073 "subsystem": "nbd", 00:21:44.073 "config": [] 00:21:44.073 } 00:21:44.073 ] 00:21:44.073 }' 00:21:44.073 [2024-11-27 09:53:59.329189] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
00:21:44.073 [2024-11-27 09:53:59.329242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3902667 ] 00:21:44.073 [2024-11-27 09:53:59.418786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.073 [2024-11-27 09:53:59.453953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:44.332 [2024-11-27 09:53:59.593234] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:44.901 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:44.901 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:44.901 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:44.901 Running I/O for 10 seconds... 00:21:46.782 4796.00 IOPS, 18.73 MiB/s [2024-11-27T08:54:03.632Z] 4953.00 IOPS, 19.35 MiB/s [2024-11-27T08:54:04.574Z] 5268.33 IOPS, 20.58 MiB/s [2024-11-27T08:54:05.515Z] 5361.75 IOPS, 20.94 MiB/s [2024-11-27T08:54:06.456Z] 5520.40 IOPS, 21.56 MiB/s [2024-11-27T08:54:07.396Z] 5612.33 IOPS, 21.92 MiB/s [2024-11-27T08:54:08.338Z] 5628.14 IOPS, 21.98 MiB/s [2024-11-27T08:54:09.306Z] 5699.12 IOPS, 22.26 MiB/s [2024-11-27T08:54:10.392Z] 5695.00 IOPS, 22.25 MiB/s [2024-11-27T08:54:10.392Z] 5758.10 IOPS, 22.49 MiB/s 00:21:54.926 Latency(us) 00:21:54.926 [2024-11-27T08:54:10.392Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.926 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:54.926 Verification LBA range: start 0x0 length 0x2000 00:21:54.926 TLSTESTn1 : 10.02 5761.07 22.50 0.00 0.00 22183.28 5570.56 69031.25 00:21:54.926 [2024-11-27T08:54:10.392Z] =================================================================================================================== 00:21:54.926 [2024-11-27T08:54:10.392Z] Total : 5761.07 22.50 0.00 0.00 22183.28 5570.56 69031.25 00:21:54.926 { 00:21:54.926 "results": [ 00:21:54.926 { 00:21:54.926 "job": "TLSTESTn1", 00:21:54.926 "core_mask": "0x4", 00:21:54.926 "workload": "verify", 00:21:54.926 "status": "finished", 00:21:54.926 "verify_range": { 00:21:54.926 "start": 0, 00:21:54.926 "length": 8192 00:21:54.926 }, 00:21:54.926 "queue_depth": 128, 00:21:54.926 "io_size": 4096, 00:21:54.926 "runtime": 10.016887, 00:21:54.926 "iops": 5761.071278931268, 00:21:54.926 "mibps": 22.504184683325267, 00:21:54.926 "io_failed": 0, 00:21:54.926 "io_timeout": 0, 00:21:54.926 "avg_latency_us": 22183.279831796863, 00:21:54.926 "min_latency_us": 5570.56, 00:21:54.926 "max_latency_us": 69031.25333333333 00:21:54.926 } 00:21:54.926 ], 00:21:54.926 "core_count": 1 00:21:54.926 } 00:21:54.926 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:54.926 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3902667 00:21:54.926 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3902667 ']' 00:21:54.927 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3902667 00:21:54.927 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:21:54.927 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:54.927 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3902667 00:21:54.927 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:54.927 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:54.927 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3902667' 00:21:54.927 killing process with pid 3902667 00:21:54.927 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3902667 00:21:54.927 Received shutdown signal, test time was about 10.000000 seconds 00:21:54.927 00:21:54.927 Latency(us) 00:21:54.927 [2024-11-27T08:54:10.393Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.927 [2024-11-27T08:54:10.393Z] =================================================================================================================== 00:21:54.927 [2024-11-27T08:54:10.393Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:54.927 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3902667 00:21:55.274 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3902382 00:21:55.274 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3902382 ']' 00:21:55.274 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3902382 00:21:55.274 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:55.274 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:55.274 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3902382 00:21:55.274 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:55.274 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:55.274 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3902382' 00:21:55.274 killing process with pid 3902382 00:21:55.274 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3902382 00:21:55.274 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3902382 00:21:55.274 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:21:55.274 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:55.274 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:55.274 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:55.274 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3905231 00:21:55.274 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3905231 00:21:55.274 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
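The kill sequence that repeats above (kill -0, ps --no-headers -o comm=, then kill and wait) is autotest_common.sh's killprocess helper. A hedged reconstruction of what the xtrace implies, simplified to the Linux branch this log takes and not the verbatim helper:

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1                    # bail out if the pid is already gone
      local name
      name=$(ps --no-headers -o comm= "$pid")       # e.g. reactor_1 or reactor_2 above
      [ "$name" = sudo ] && return 1                # never signal a sudo wrapper directly
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                   # reap it so sockets and SHM are released
  }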
00:21:55.274 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3905231 ']' 00:21:55.274 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.274 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:55.274 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.274 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:55.274 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:55.274 [2024-11-27 09:54:10.660937] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:21:55.274 [2024-11-27 09:54:10.660993] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:55.535 [2024-11-27 09:54:10.754632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.535 [2024-11-27 09:54:10.789842] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:55.535 [2024-11-27 09:54:10.789873] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:55.535 [2024-11-27 09:54:10.789882] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:55.535 [2024-11-27 09:54:10.789889] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:55.535 [2024-11-27 09:54:10.789895] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:55.535 [2024-11-27 09:54:10.790443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.134 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:56.134 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:56.134 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:56.134 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:56.134 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:56.134 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:56.134 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.qiYKqJAW4a 00:21:56.134 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.qiYKqJAW4a 00:21:56.134 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:56.397 [2024-11-27 09:54:11.639006] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:56.397 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:56.397 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:56.658 [2024-11-27 09:54:11.979863] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:56.658 [2024-11-27 09:54:11.980231] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:56.658 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:56.920 malloc0 00:21:56.920 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:56.920 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qiYKqJAW4a 00:21:57.182 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:57.442 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3905720 00:21:57.442 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:57.442 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:57.442 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3905720 /var/tmp/bdevperf.sock 00:21:57.442 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 3905720 ']' 00:21:57.442 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:57.442 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:57.442 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:57.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:57.442 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:57.442 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:57.442 [2024-11-27 09:54:12.792671] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:21:57.442 [2024-11-27 09:54:12.792744] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3905720 ] 00:21:57.442 [2024-11-27 09:54:12.879824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.702 [2024-11-27 09:54:12.914557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.272 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:58.272 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:58.272 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qiYKqJAW4a 00:21:58.272 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:58.533 [2024-11-27 09:54:13.852279] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:58.533 nvme0n1 00:21:58.533 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:58.793 Running I/O for 1 seconds... 
00:21:59.736 5022.00 IOPS, 19.62 MiB/s 00:21:59.736 Latency(us) 00:21:59.736 [2024-11-27T08:54:15.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.736 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:59.736 Verification LBA range: start 0x0 length 0x2000 00:21:59.736 nvme0n1 : 1.02 5064.72 19.78 0.00 0.00 25077.63 6444.37 29272.75 00:21:59.736 [2024-11-27T08:54:15.202Z] =================================================================================================================== 00:21:59.736 [2024-11-27T08:54:15.202Z] Total : 5064.72 19.78 0.00 0.00 25077.63 6444.37 29272.75 00:21:59.736 { 00:21:59.736 "results": [ 00:21:59.736 { 00:21:59.736 "job": "nvme0n1", 00:21:59.736 "core_mask": "0x2", 00:21:59.736 "workload": "verify", 00:21:59.736 "status": "finished", 00:21:59.736 "verify_range": { 00:21:59.736 "start": 0, 00:21:59.736 "length": 8192 00:21:59.736 }, 00:21:59.736 "queue_depth": 128, 00:21:59.736 "io_size": 4096, 00:21:59.736 "runtime": 1.016839, 00:21:59.736 "iops": 5064.715259741218, 00:21:59.736 "mibps": 19.784043983364132, 00:21:59.736 "io_failed": 0, 00:21:59.736 "io_timeout": 0, 00:21:59.736 "avg_latency_us": 25077.634071197408, 00:21:59.736 "min_latency_us": 6444.373333333333, 00:21:59.736 "max_latency_us": 29272.746666666666 00:21:59.736 } 00:21:59.736 ], 00:21:59.736 "core_count": 1 00:21:59.736 } 00:21:59.736 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3905720 00:21:59.736 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3905720 ']' 00:21:59.736 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3905720 00:21:59.736 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:59.736 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:59.736 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3905720 00:21:59.736 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:59.736 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:59.736 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3905720' 00:21:59.736 killing process with pid 3905720 00:21:59.736 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3905720 00:21:59.736 Received shutdown signal, test time was about 1.000000 seconds 00:21:59.736 00:21:59.736 Latency(us) 00:21:59.736 [2024-11-27T08:54:15.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.736 [2024-11-27T08:54:15.202Z] =================================================================================================================== 00:21:59.736 [2024-11-27T08:54:15.202Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:59.736 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3905720 00:21:59.997 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3905231 00:21:59.997 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3905231 ']' 00:21:59.997 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3905231 00:21:59.997 09:54:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:59.997 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:59.997 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3905231 00:21:59.997 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:59.997 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:59.997 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3905231' 00:21:59.997 killing process with pid 3905231 00:21:59.997 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3905231 00:21:59.997 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3905231 00:21:59.997 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:21:59.997 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:59.997 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:59.997 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.997 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3906369 00:21:59.997 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3906369 00:21:59.997 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:59.997 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3906369 ']' 00:21:59.997 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.997 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:59.997 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.997 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:59.997 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.258 [2024-11-27 09:54:15.510153] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:22:00.258 [2024-11-27 09:54:15.510228] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.258 [2024-11-27 09:54:15.606961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.258 [2024-11-27 09:54:15.656946] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.258 [2024-11-27 09:54:15.657002] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:00.258 [2024-11-27 09:54:15.657010] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:00.258 [2024-11-27 09:54:15.657018] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:00.258 [2024-11-27 09:54:15.657024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:00.258 [2024-11-27 09:54:15.657773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.202 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:01.202 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:01.202 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:01.202 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:01.202 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.202 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:01.202 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:22:01.202 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.202 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.202 [2024-11-27 09:54:16.359086] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:01.202 malloc0 00:22:01.202 [2024-11-27 09:54:16.389209] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:01.202 [2024-11-27 09:54:16.389569] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.202 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.202 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3906500 00:22:01.202 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3906500 /var/tmp/bdevperf.sock 00:22:01.202 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:01.202 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3906500 ']' 00:22:01.202 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:01.202 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:01.202 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:01.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:01.202 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:01.202 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.202 [2024-11-27 09:54:16.483146] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
00:22:01.202 [2024-11-27 09:54:16.483231] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3906500 ] 00:22:01.202 [2024-11-27 09:54:16.571898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.202 [2024-11-27 09:54:16.605539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.143 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:02.143 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:02.143 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qiYKqJAW4a 00:22:02.143 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:02.143 [2024-11-27 09:54:17.570915] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:02.403 nvme0n1 00:22:02.403 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:02.403 Running I/O for 1 seconds... 00:22:03.347 4110.00 IOPS, 16.05 MiB/s 00:22:03.347 Latency(us) 00:22:03.347 [2024-11-27T08:54:18.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.347 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:03.347 Verification LBA range: start 0x0 length 0x2000 00:22:03.347 nvme0n1 : 1.02 4161.46 16.26 0.00 0.00 30520.40 6444.37 65099.09 00:22:03.347 [2024-11-27T08:54:18.813Z] =================================================================================================================== 00:22:03.347 [2024-11-27T08:54:18.813Z] Total : 4161.46 16.26 0.00 0.00 30520.40 6444.37 65099.09 00:22:03.347 { 00:22:03.347 "results": [ 00:22:03.347 { 00:22:03.347 "job": "nvme0n1", 00:22:03.347 "core_mask": "0x2", 00:22:03.347 "workload": "verify", 00:22:03.347 "status": "finished", 00:22:03.347 "verify_range": { 00:22:03.347 "start": 0, 00:22:03.347 "length": 8192 00:22:03.347 }, 00:22:03.347 "queue_depth": 128, 00:22:03.347 "io_size": 4096, 00:22:03.347 "runtime": 1.018393, 00:22:03.347 "iops": 4161.458297533467, 00:22:03.347 "mibps": 16.255696474740105, 00:22:03.347 "io_failed": 0, 00:22:03.347 "io_timeout": 0, 00:22:03.347 "avg_latency_us": 30520.39651407897, 00:22:03.347 "min_latency_us": 6444.373333333333, 00:22:03.347 "max_latency_us": 65099.09333333333 00:22:03.347 } 00:22:03.347 ], 00:22:03.347 "core_count": 1 00:22:03.347 } 00:22:03.347 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:22:03.347 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.347 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.608 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.608 09:54:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:22:03.608 "subsystems": [ 00:22:03.608 { 00:22:03.608 "subsystem": "keyring", 00:22:03.608 "config": [ 00:22:03.608 { 00:22:03.608 "method": "keyring_file_add_key", 00:22:03.608 "params": { 00:22:03.608 "name": "key0", 00:22:03.608 "path": "/tmp/tmp.qiYKqJAW4a" 00:22:03.608 } 00:22:03.608 } 00:22:03.608 ] 00:22:03.608 }, 00:22:03.608 { 00:22:03.608 "subsystem": "iobuf", 00:22:03.608 "config": [ 00:22:03.608 { 00:22:03.608 "method": "iobuf_set_options", 00:22:03.608 "params": { 00:22:03.608 "small_pool_count": 8192, 00:22:03.608 "large_pool_count": 1024, 00:22:03.608 "small_bufsize": 8192, 00:22:03.608 "large_bufsize": 135168, 00:22:03.608 "enable_numa": false 00:22:03.608 } 00:22:03.608 } 00:22:03.608 ] 00:22:03.608 }, 00:22:03.608 { 00:22:03.608 "subsystem": "sock", 00:22:03.608 "config": [ 00:22:03.608 { 00:22:03.608 "method": "sock_set_default_impl", 00:22:03.608 "params": { 00:22:03.608 "impl_name": "posix" 00:22:03.608 } 00:22:03.608 }, 00:22:03.608 { 00:22:03.608 "method": "sock_impl_set_options", 00:22:03.608 "params": { 00:22:03.608 "impl_name": "ssl", 00:22:03.608 "recv_buf_size": 4096, 00:22:03.608 "send_buf_size": 4096, 00:22:03.608 "enable_recv_pipe": true, 00:22:03.608 "enable_quickack": false, 00:22:03.608 "enable_placement_id": 0, 00:22:03.608 "enable_zerocopy_send_server": true, 00:22:03.608 "enable_zerocopy_send_client": false, 00:22:03.608 "zerocopy_threshold": 0, 00:22:03.608 "tls_version": 0, 00:22:03.608 "enable_ktls": false 00:22:03.608 } 00:22:03.608 }, 00:22:03.608 { 00:22:03.608 "method": "sock_impl_set_options", 00:22:03.608 "params": { 00:22:03.608 "impl_name": "posix", 00:22:03.608 "recv_buf_size": 2097152, 00:22:03.608 "send_buf_size": 2097152, 00:22:03.608 "enable_recv_pipe": true, 00:22:03.608 "enable_quickack": false, 00:22:03.608 "enable_placement_id": 0, 00:22:03.608 "enable_zerocopy_send_server": true, 00:22:03.608 "enable_zerocopy_send_client": false, 00:22:03.608 "zerocopy_threshold": 0, 00:22:03.608 "tls_version": 0, 00:22:03.608 "enable_ktls": false 00:22:03.608 } 00:22:03.608 } 00:22:03.608 ] 00:22:03.608 }, 00:22:03.608 { 00:22:03.608 "subsystem": "vmd", 00:22:03.608 "config": [] 00:22:03.608 }, 00:22:03.608 { 00:22:03.608 "subsystem": "accel", 00:22:03.608 "config": [ 00:22:03.608 { 00:22:03.608 "method": "accel_set_options", 00:22:03.608 "params": { 00:22:03.608 "small_cache_size": 128, 00:22:03.608 "large_cache_size": 16, 00:22:03.608 "task_count": 2048, 00:22:03.608 "sequence_count": 2048, 00:22:03.608 "buf_count": 2048 00:22:03.608 } 00:22:03.608 } 00:22:03.608 ] 00:22:03.608 }, 00:22:03.608 { 00:22:03.608 "subsystem": "bdev", 00:22:03.608 "config": [ 00:22:03.608 { 00:22:03.608 "method": "bdev_set_options", 00:22:03.608 "params": { 00:22:03.608 "bdev_io_pool_size": 65535, 00:22:03.608 "bdev_io_cache_size": 256, 00:22:03.608 "bdev_auto_examine": true, 00:22:03.608 "iobuf_small_cache_size": 128, 00:22:03.608 "iobuf_large_cache_size": 16 00:22:03.608 } 00:22:03.608 }, 00:22:03.608 { 00:22:03.608 "method": "bdev_raid_set_options", 00:22:03.608 "params": { 00:22:03.608 "process_window_size_kb": 1024, 00:22:03.608 "process_max_bandwidth_mb_sec": 0 00:22:03.608 } 00:22:03.608 }, 00:22:03.608 { 00:22:03.608 "method": "bdev_iscsi_set_options", 00:22:03.608 "params": { 00:22:03.608 "timeout_sec": 30 00:22:03.608 } 00:22:03.608 }, 00:22:03.608 { 00:22:03.608 "method": "bdev_nvme_set_options", 00:22:03.608 "params": { 00:22:03.608 "action_on_timeout": "none", 00:22:03.608 
"timeout_us": 0, 00:22:03.608 "timeout_admin_us": 0, 00:22:03.608 "keep_alive_timeout_ms": 10000, 00:22:03.608 "arbitration_burst": 0, 00:22:03.608 "low_priority_weight": 0, 00:22:03.608 "medium_priority_weight": 0, 00:22:03.608 "high_priority_weight": 0, 00:22:03.608 "nvme_adminq_poll_period_us": 10000, 00:22:03.608 "nvme_ioq_poll_period_us": 0, 00:22:03.608 "io_queue_requests": 0, 00:22:03.608 "delay_cmd_submit": true, 00:22:03.608 "transport_retry_count": 4, 00:22:03.608 "bdev_retry_count": 3, 00:22:03.608 "transport_ack_timeout": 0, 00:22:03.608 "ctrlr_loss_timeout_sec": 0, 00:22:03.608 "reconnect_delay_sec": 0, 00:22:03.608 "fast_io_fail_timeout_sec": 0, 00:22:03.608 "disable_auto_failback": false, 00:22:03.608 "generate_uuids": false, 00:22:03.608 "transport_tos": 0, 00:22:03.608 "nvme_error_stat": false, 00:22:03.608 "rdma_srq_size": 0, 00:22:03.608 "io_path_stat": false, 00:22:03.608 "allow_accel_sequence": false, 00:22:03.608 "rdma_max_cq_size": 0, 00:22:03.609 "rdma_cm_event_timeout_ms": 0, 00:22:03.609 "dhchap_digests": [ 00:22:03.609 "sha256", 00:22:03.609 "sha384", 00:22:03.609 "sha512" 00:22:03.609 ], 00:22:03.609 "dhchap_dhgroups": [ 00:22:03.609 "null", 00:22:03.609 "ffdhe2048", 00:22:03.609 "ffdhe3072", 00:22:03.609 "ffdhe4096", 00:22:03.609 "ffdhe6144", 00:22:03.609 "ffdhe8192" 00:22:03.609 ] 00:22:03.609 } 00:22:03.609 }, 00:22:03.609 { 00:22:03.609 "method": "bdev_nvme_set_hotplug", 00:22:03.609 "params": { 00:22:03.609 "period_us": 100000, 00:22:03.609 "enable": false 00:22:03.609 } 00:22:03.609 }, 00:22:03.609 { 00:22:03.609 "method": "bdev_malloc_create", 00:22:03.609 "params": { 00:22:03.609 "name": "malloc0", 00:22:03.609 "num_blocks": 8192, 00:22:03.609 "block_size": 4096, 00:22:03.609 "physical_block_size": 4096, 00:22:03.609 "uuid": "23d54889-a761-4e60-97ff-ea901ba0963d", 00:22:03.609 "optimal_io_boundary": 0, 00:22:03.609 "md_size": 0, 00:22:03.609 "dif_type": 0, 00:22:03.609 "dif_is_head_of_md": false, 00:22:03.609 "dif_pi_format": 0 00:22:03.609 } 00:22:03.609 }, 00:22:03.609 { 00:22:03.609 "method": "bdev_wait_for_examine" 00:22:03.609 } 00:22:03.609 ] 00:22:03.609 }, 00:22:03.609 { 00:22:03.609 "subsystem": "nbd", 00:22:03.609 "config": [] 00:22:03.609 }, 00:22:03.609 { 00:22:03.609 "subsystem": "scheduler", 00:22:03.609 "config": [ 00:22:03.609 { 00:22:03.609 "method": "framework_set_scheduler", 00:22:03.609 "params": { 00:22:03.609 "name": "static" 00:22:03.609 } 00:22:03.609 } 00:22:03.609 ] 00:22:03.609 }, 00:22:03.609 { 00:22:03.609 "subsystem": "nvmf", 00:22:03.609 "config": [ 00:22:03.609 { 00:22:03.609 "method": "nvmf_set_config", 00:22:03.609 "params": { 00:22:03.609 "discovery_filter": "match_any", 00:22:03.609 "admin_cmd_passthru": { 00:22:03.609 "identify_ctrlr": false 00:22:03.609 }, 00:22:03.609 "dhchap_digests": [ 00:22:03.609 "sha256", 00:22:03.609 "sha384", 00:22:03.609 "sha512" 00:22:03.609 ], 00:22:03.609 "dhchap_dhgroups": [ 00:22:03.609 "null", 00:22:03.609 "ffdhe2048", 00:22:03.609 "ffdhe3072", 00:22:03.609 "ffdhe4096", 00:22:03.609 "ffdhe6144", 00:22:03.609 "ffdhe8192" 00:22:03.609 ] 00:22:03.609 } 00:22:03.609 }, 00:22:03.609 { 00:22:03.609 "method": "nvmf_set_max_subsystems", 00:22:03.609 "params": { 00:22:03.609 "max_subsystems": 1024 00:22:03.609 } 00:22:03.609 }, 00:22:03.609 { 00:22:03.609 "method": "nvmf_set_crdt", 00:22:03.609 "params": { 00:22:03.609 "crdt1": 0, 00:22:03.609 "crdt2": 0, 00:22:03.609 "crdt3": 0 00:22:03.609 } 00:22:03.609 }, 00:22:03.609 { 00:22:03.609 "method": "nvmf_create_transport", 00:22:03.609 "params": 
{ 00:22:03.609 "trtype": "TCP", 00:22:03.609 "max_queue_depth": 128, 00:22:03.609 "max_io_qpairs_per_ctrlr": 127, 00:22:03.609 "in_capsule_data_size": 4096, 00:22:03.609 "max_io_size": 131072, 00:22:03.609 "io_unit_size": 131072, 00:22:03.609 "max_aq_depth": 128, 00:22:03.609 "num_shared_buffers": 511, 00:22:03.609 "buf_cache_size": 4294967295, 00:22:03.609 "dif_insert_or_strip": false, 00:22:03.609 "zcopy": false, 00:22:03.609 "c2h_success": false, 00:22:03.609 "sock_priority": 0, 00:22:03.609 "abort_timeout_sec": 1, 00:22:03.609 "ack_timeout": 0, 00:22:03.609 "data_wr_pool_size": 0 00:22:03.609 } 00:22:03.609 }, 00:22:03.609 { 00:22:03.609 "method": "nvmf_create_subsystem", 00:22:03.609 "params": { 00:22:03.609 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.609 "allow_any_host": false, 00:22:03.609 "serial_number": "00000000000000000000", 00:22:03.609 "model_number": "SPDK bdev Controller", 00:22:03.609 "max_namespaces": 32, 00:22:03.609 "min_cntlid": 1, 00:22:03.609 "max_cntlid": 65519, 00:22:03.609 "ana_reporting": false 00:22:03.609 } 00:22:03.609 }, 00:22:03.609 { 00:22:03.609 "method": "nvmf_subsystem_add_host", 00:22:03.609 "params": { 00:22:03.609 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.609 "host": "nqn.2016-06.io.spdk:host1", 00:22:03.609 "psk": "key0" 00:22:03.609 } 00:22:03.609 }, 00:22:03.609 { 00:22:03.609 "method": "nvmf_subsystem_add_ns", 00:22:03.609 "params": { 00:22:03.609 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.609 "namespace": { 00:22:03.609 "nsid": 1, 00:22:03.609 "bdev_name": "malloc0", 00:22:03.609 "nguid": "23D54889A7614E6097FFEA901BA0963D", 00:22:03.609 "uuid": "23d54889-a761-4e60-97ff-ea901ba0963d", 00:22:03.609 "no_auto_visible": false 00:22:03.609 } 00:22:03.609 } 00:22:03.609 }, 00:22:03.609 { 00:22:03.609 "method": "nvmf_subsystem_add_listener", 00:22:03.609 "params": { 00:22:03.609 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.609 "listen_address": { 00:22:03.609 "trtype": "TCP", 00:22:03.609 "adrfam": "IPv4", 00:22:03.609 "traddr": "10.0.0.2", 00:22:03.609 "trsvcid": "4420" 00:22:03.609 }, 00:22:03.609 "secure_channel": false, 00:22:03.609 "sock_impl": "ssl" 00:22:03.609 } 00:22:03.609 } 00:22:03.609 ] 00:22:03.609 } 00:22:03.609 ] 00:22:03.609 }' 00:22:03.609 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:03.870 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:22:03.870 "subsystems": [ 00:22:03.870 { 00:22:03.870 "subsystem": "keyring", 00:22:03.870 "config": [ 00:22:03.870 { 00:22:03.870 "method": "keyring_file_add_key", 00:22:03.870 "params": { 00:22:03.870 "name": "key0", 00:22:03.870 "path": "/tmp/tmp.qiYKqJAW4a" 00:22:03.870 } 00:22:03.870 } 00:22:03.870 ] 00:22:03.870 }, 00:22:03.870 { 00:22:03.870 "subsystem": "iobuf", 00:22:03.870 "config": [ 00:22:03.870 { 00:22:03.870 "method": "iobuf_set_options", 00:22:03.870 "params": { 00:22:03.870 "small_pool_count": 8192, 00:22:03.870 "large_pool_count": 1024, 00:22:03.870 "small_bufsize": 8192, 00:22:03.870 "large_bufsize": 135168, 00:22:03.870 "enable_numa": false 00:22:03.870 } 00:22:03.870 } 00:22:03.870 ] 00:22:03.870 }, 00:22:03.870 { 00:22:03.870 "subsystem": "sock", 00:22:03.870 "config": [ 00:22:03.870 { 00:22:03.870 "method": "sock_set_default_impl", 00:22:03.870 "params": { 00:22:03.870 "impl_name": "posix" 00:22:03.870 } 00:22:03.870 }, 00:22:03.870 { 00:22:03.870 "method": "sock_impl_set_options", 00:22:03.870 
"params": { 00:22:03.870 "impl_name": "ssl", 00:22:03.870 "recv_buf_size": 4096, 00:22:03.870 "send_buf_size": 4096, 00:22:03.870 "enable_recv_pipe": true, 00:22:03.870 "enable_quickack": false, 00:22:03.870 "enable_placement_id": 0, 00:22:03.870 "enable_zerocopy_send_server": true, 00:22:03.870 "enable_zerocopy_send_client": false, 00:22:03.870 "zerocopy_threshold": 0, 00:22:03.870 "tls_version": 0, 00:22:03.870 "enable_ktls": false 00:22:03.870 } 00:22:03.870 }, 00:22:03.870 { 00:22:03.870 "method": "sock_impl_set_options", 00:22:03.870 "params": { 00:22:03.870 "impl_name": "posix", 00:22:03.870 "recv_buf_size": 2097152, 00:22:03.870 "send_buf_size": 2097152, 00:22:03.870 "enable_recv_pipe": true, 00:22:03.870 "enable_quickack": false, 00:22:03.870 "enable_placement_id": 0, 00:22:03.870 "enable_zerocopy_send_server": true, 00:22:03.870 "enable_zerocopy_send_client": false, 00:22:03.870 "zerocopy_threshold": 0, 00:22:03.870 "tls_version": 0, 00:22:03.870 "enable_ktls": false 00:22:03.870 } 00:22:03.870 } 00:22:03.870 ] 00:22:03.870 }, 00:22:03.870 { 00:22:03.870 "subsystem": "vmd", 00:22:03.870 "config": [] 00:22:03.870 }, 00:22:03.870 { 00:22:03.870 "subsystem": "accel", 00:22:03.870 "config": [ 00:22:03.870 { 00:22:03.870 "method": "accel_set_options", 00:22:03.870 "params": { 00:22:03.870 "small_cache_size": 128, 00:22:03.870 "large_cache_size": 16, 00:22:03.870 "task_count": 2048, 00:22:03.870 "sequence_count": 2048, 00:22:03.870 "buf_count": 2048 00:22:03.870 } 00:22:03.870 } 00:22:03.870 ] 00:22:03.870 }, 00:22:03.870 { 00:22:03.870 "subsystem": "bdev", 00:22:03.870 "config": [ 00:22:03.870 { 00:22:03.870 "method": "bdev_set_options", 00:22:03.870 "params": { 00:22:03.870 "bdev_io_pool_size": 65535, 00:22:03.870 "bdev_io_cache_size": 256, 00:22:03.870 "bdev_auto_examine": true, 00:22:03.871 "iobuf_small_cache_size": 128, 00:22:03.871 "iobuf_large_cache_size": 16 00:22:03.871 } 00:22:03.871 }, 00:22:03.871 { 00:22:03.871 "method": "bdev_raid_set_options", 00:22:03.871 "params": { 00:22:03.871 "process_window_size_kb": 1024, 00:22:03.871 "process_max_bandwidth_mb_sec": 0 00:22:03.871 } 00:22:03.871 }, 00:22:03.871 { 00:22:03.871 "method": "bdev_iscsi_set_options", 00:22:03.871 "params": { 00:22:03.871 "timeout_sec": 30 00:22:03.871 } 00:22:03.871 }, 00:22:03.871 { 00:22:03.871 "method": "bdev_nvme_set_options", 00:22:03.871 "params": { 00:22:03.871 "action_on_timeout": "none", 00:22:03.871 "timeout_us": 0, 00:22:03.871 "timeout_admin_us": 0, 00:22:03.871 "keep_alive_timeout_ms": 10000, 00:22:03.871 "arbitration_burst": 0, 00:22:03.871 "low_priority_weight": 0, 00:22:03.871 "medium_priority_weight": 0, 00:22:03.871 "high_priority_weight": 0, 00:22:03.871 "nvme_adminq_poll_period_us": 10000, 00:22:03.871 "nvme_ioq_poll_period_us": 0, 00:22:03.871 "io_queue_requests": 512, 00:22:03.871 "delay_cmd_submit": true, 00:22:03.871 "transport_retry_count": 4, 00:22:03.871 "bdev_retry_count": 3, 00:22:03.871 "transport_ack_timeout": 0, 00:22:03.871 "ctrlr_loss_timeout_sec": 0, 00:22:03.871 "reconnect_delay_sec": 0, 00:22:03.871 "fast_io_fail_timeout_sec": 0, 00:22:03.871 "disable_auto_failback": false, 00:22:03.871 "generate_uuids": false, 00:22:03.871 "transport_tos": 0, 00:22:03.871 "nvme_error_stat": false, 00:22:03.871 "rdma_srq_size": 0, 00:22:03.871 "io_path_stat": false, 00:22:03.871 "allow_accel_sequence": false, 00:22:03.871 "rdma_max_cq_size": 0, 00:22:03.871 "rdma_cm_event_timeout_ms": 0, 00:22:03.871 "dhchap_digests": [ 00:22:03.871 "sha256", 00:22:03.871 "sha384", 00:22:03.871 
"sha512" 00:22:03.871 ], 00:22:03.871 "dhchap_dhgroups": [ 00:22:03.871 "null", 00:22:03.871 "ffdhe2048", 00:22:03.871 "ffdhe3072", 00:22:03.871 "ffdhe4096", 00:22:03.871 "ffdhe6144", 00:22:03.871 "ffdhe8192" 00:22:03.871 ] 00:22:03.871 } 00:22:03.871 }, 00:22:03.871 { 00:22:03.871 "method": "bdev_nvme_attach_controller", 00:22:03.871 "params": { 00:22:03.871 "name": "nvme0", 00:22:03.871 "trtype": "TCP", 00:22:03.871 "adrfam": "IPv4", 00:22:03.871 "traddr": "10.0.0.2", 00:22:03.871 "trsvcid": "4420", 00:22:03.871 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.871 "prchk_reftag": false, 00:22:03.871 "prchk_guard": false, 00:22:03.871 "ctrlr_loss_timeout_sec": 0, 00:22:03.871 "reconnect_delay_sec": 0, 00:22:03.871 "fast_io_fail_timeout_sec": 0, 00:22:03.871 "psk": "key0", 00:22:03.871 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:03.871 "hdgst": false, 00:22:03.871 "ddgst": false, 00:22:03.871 "multipath": "multipath" 00:22:03.871 } 00:22:03.871 }, 00:22:03.871 { 00:22:03.871 "method": "bdev_nvme_set_hotplug", 00:22:03.871 "params": { 00:22:03.871 "period_us": 100000, 00:22:03.871 "enable": false 00:22:03.871 } 00:22:03.871 }, 00:22:03.871 { 00:22:03.871 "method": "bdev_enable_histogram", 00:22:03.871 "params": { 00:22:03.871 "name": "nvme0n1", 00:22:03.871 "enable": true 00:22:03.871 } 00:22:03.871 }, 00:22:03.871 { 00:22:03.871 "method": "bdev_wait_for_examine" 00:22:03.871 } 00:22:03.871 ] 00:22:03.871 }, 00:22:03.871 { 00:22:03.871 "subsystem": "nbd", 00:22:03.871 "config": [] 00:22:03.871 } 00:22:03.871 ] 00:22:03.871 }' 00:22:03.871 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3906500 00:22:03.871 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3906500 ']' 00:22:03.871 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3906500 00:22:03.871 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:03.871 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:03.871 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3906500 00:22:03.871 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:03.871 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:03.871 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3906500' 00:22:03.871 killing process with pid 3906500 00:22:03.871 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3906500 00:22:03.871 Received shutdown signal, test time was about 1.000000 seconds 00:22:03.871 00:22:03.871 Latency(us) 00:22:03.871 [2024-11-27T08:54:19.337Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.871 [2024-11-27T08:54:19.337Z] =================================================================================================================== 00:22:03.871 [2024-11-27T08:54:19.337Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:03.871 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3906500 00:22:04.133 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3906369 00:22:04.133 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3906369 
']' 00:22:04.133 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3906369 00:22:04.133 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:04.133 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:04.133 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3906369 00:22:04.133 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:04.133 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:04.133 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3906369' 00:22:04.133 killing process with pid 3906369 00:22:04.133 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3906369 00:22:04.133 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3906369 00:22:04.133 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:22:04.133 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:04.133 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:04.133 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:22:04.133 "subsystems": [ 00:22:04.133 { 00:22:04.133 "subsystem": "keyring", 00:22:04.133 "config": [ 00:22:04.133 { 00:22:04.133 "method": "keyring_file_add_key", 00:22:04.133 "params": { 00:22:04.133 "name": "key0", 00:22:04.133 "path": "/tmp/tmp.qiYKqJAW4a" 00:22:04.133 } 00:22:04.133 } 00:22:04.133 ] 00:22:04.133 }, 00:22:04.133 { 00:22:04.133 "subsystem": "iobuf", 00:22:04.133 "config": [ 00:22:04.133 { 00:22:04.133 "method": "iobuf_set_options", 00:22:04.133 "params": { 00:22:04.133 "small_pool_count": 8192, 00:22:04.133 "large_pool_count": 1024, 00:22:04.133 "small_bufsize": 8192, 00:22:04.133 "large_bufsize": 135168, 00:22:04.133 "enable_numa": false 00:22:04.133 } 00:22:04.133 } 00:22:04.133 ] 00:22:04.133 }, 00:22:04.133 { 00:22:04.133 "subsystem": "sock", 00:22:04.133 "config": [ 00:22:04.133 { 00:22:04.133 "method": "sock_set_default_impl", 00:22:04.133 "params": { 00:22:04.133 "impl_name": "posix" 00:22:04.133 } 00:22:04.133 }, 00:22:04.133 { 00:22:04.133 "method": "sock_impl_set_options", 00:22:04.133 "params": { 00:22:04.133 "impl_name": "ssl", 00:22:04.133 "recv_buf_size": 4096, 00:22:04.133 "send_buf_size": 4096, 00:22:04.133 "enable_recv_pipe": true, 00:22:04.133 "enable_quickack": false, 00:22:04.133 "enable_placement_id": 0, 00:22:04.133 "enable_zerocopy_send_server": true, 00:22:04.133 "enable_zerocopy_send_client": false, 00:22:04.133 "zerocopy_threshold": 0, 00:22:04.133 "tls_version": 0, 00:22:04.133 "enable_ktls": false 00:22:04.133 } 00:22:04.133 }, 00:22:04.133 { 00:22:04.133 "method": "sock_impl_set_options", 00:22:04.133 "params": { 00:22:04.133 "impl_name": "posix", 00:22:04.133 "recv_buf_size": 2097152, 00:22:04.133 "send_buf_size": 2097152, 00:22:04.133 "enable_recv_pipe": true, 00:22:04.133 "enable_quickack": false, 00:22:04.133 "enable_placement_id": 0, 00:22:04.133 "enable_zerocopy_send_server": true, 00:22:04.133 "enable_zerocopy_send_client": false, 00:22:04.133 "zerocopy_threshold": 0, 00:22:04.133 "tls_version": 0, 00:22:04.133 "enable_ktls": 
false 00:22:04.133 } 00:22:04.133 } 00:22:04.133 ] 00:22:04.133 }, 00:22:04.133 { 00:22:04.133 "subsystem": "vmd", 00:22:04.133 "config": [] 00:22:04.133 }, 00:22:04.133 { 00:22:04.133 "subsystem": "accel", 00:22:04.133 "config": [ 00:22:04.133 { 00:22:04.133 "method": "accel_set_options", 00:22:04.133 "params": { 00:22:04.133 "small_cache_size": 128, 00:22:04.133 "large_cache_size": 16, 00:22:04.133 "task_count": 2048, 00:22:04.133 "sequence_count": 2048, 00:22:04.133 "buf_count": 2048 00:22:04.133 } 00:22:04.133 } 00:22:04.133 ] 00:22:04.133 }, 00:22:04.133 { 00:22:04.133 "subsystem": "bdev", 00:22:04.133 "config": [ 00:22:04.133 { 00:22:04.133 "method": "bdev_set_options", 00:22:04.133 "params": { 00:22:04.133 "bdev_io_pool_size": 65535, 00:22:04.133 "bdev_io_cache_size": 256, 00:22:04.133 "bdev_auto_examine": true, 00:22:04.133 "iobuf_small_cache_size": 128, 00:22:04.133 "iobuf_large_cache_size": 16 00:22:04.133 } 00:22:04.133 }, 00:22:04.133 { 00:22:04.133 "method": "bdev_raid_set_options", 00:22:04.133 "params": { 00:22:04.133 "process_window_size_kb": 1024, 00:22:04.133 "process_max_bandwidth_mb_sec": 0 00:22:04.133 } 00:22:04.133 }, 00:22:04.133 { 00:22:04.133 "method": "bdev_iscsi_set_options", 00:22:04.133 "params": { 00:22:04.133 "timeout_sec": 30 00:22:04.133 } 00:22:04.133 }, 00:22:04.133 { 00:22:04.133 "method": "bdev_nvme_set_options", 00:22:04.133 "params": { 00:22:04.133 "action_on_timeout": "none", 00:22:04.133 "timeout_us": 0, 00:22:04.133 "timeout_admin_us": 0, 00:22:04.133 "keep_alive_timeout_ms": 10000, 00:22:04.133 "arbitration_burst": 0, 00:22:04.133 "low_priority_weight": 0, 00:22:04.133 "medium_priority_weight": 0, 00:22:04.133 "high_priority_weight": 0, 00:22:04.133 "nvme_adminq_poll_period_us": 10000, 00:22:04.133 "nvme_ioq_poll_period_us": 0, 00:22:04.133 "io_queue_requests": 0, 00:22:04.133 "delay_cmd_submit": true, 00:22:04.133 "transport_retry_count": 4, 00:22:04.133 "bdev_retry_count": 3, 00:22:04.133 "transport_ack_timeout": 0, 00:22:04.133 "ctrlr_loss_timeout_sec": 0, 00:22:04.133 "reconnect_delay_sec": 0, 00:22:04.133 "fast_io_fail_timeout_sec": 0, 00:22:04.133 "disable_auto_failback": false, 00:22:04.133 "generate_uuids": false, 00:22:04.133 "transport_tos": 0, 00:22:04.133 "nvme_error_stat": false, 00:22:04.133 "rdma_srq_size": 0, 00:22:04.133 "io_path_stat": false, 00:22:04.133 "allow_accel_sequence": false, 00:22:04.133 "rdma_max_cq_size": 0, 00:22:04.133 "rdma_cm_event_timeout_ms": 0, 00:22:04.133 "dhchap_digests": [ 00:22:04.133 "sha256", 00:22:04.133 "sha384", 00:22:04.133 "sha512" 00:22:04.133 ], 00:22:04.133 "dhchap_dhgroups": [ 00:22:04.133 "null", 00:22:04.133 "ffdhe2048", 00:22:04.133 "ffdhe3072", 00:22:04.133 "ffdhe4096", 00:22:04.133 "ffdhe6144", 00:22:04.133 "ffdhe8192" 00:22:04.133 ] 00:22:04.133 } 00:22:04.133 }, 00:22:04.133 { 00:22:04.133 "method": "bdev_nvme_set_hotplug", 00:22:04.133 "params": { 00:22:04.133 "period_us": 100000, 00:22:04.133 "enable": false 00:22:04.133 } 00:22:04.133 }, 00:22:04.133 { 00:22:04.133 "method": "bdev_malloc_create", 00:22:04.133 "params": { 00:22:04.133 "name": "malloc0", 00:22:04.133 "num_blocks": 8192, 00:22:04.133 "block_size": 4096, 00:22:04.133 "physical_block_size": 4096, 00:22:04.133 "uuid": "23d54889-a761-4e60-97ff-ea901ba0963d", 00:22:04.133 "optimal_io_boundary": 0, 00:22:04.133 "md_size": 0, 00:22:04.133 "dif_type": 0, 00:22:04.133 "dif_is_head_of_md": false, 00:22:04.133 "dif_pi_format": 0 00:22:04.133 } 00:22:04.133 }, 00:22:04.133 { 00:22:04.133 "method": "bdev_wait_for_examine" 
00:22:04.133 } 00:22:04.133 ] 00:22:04.133 }, 00:22:04.133 { 00:22:04.134 "subsystem": "nbd", 00:22:04.134 "config": [] 00:22:04.134 }, 00:22:04.134 { 00:22:04.134 "subsystem": "scheduler", 00:22:04.134 "config": [ 00:22:04.134 { 00:22:04.134 "method": "framework_set_scheduler", 00:22:04.134 "params": { 00:22:04.134 "name": "static" 00:22:04.134 } 00:22:04.134 } 00:22:04.134 ] 00:22:04.134 }, 00:22:04.134 { 00:22:04.134 "subsystem": "nvmf", 00:22:04.134 "config": [ 00:22:04.134 { 00:22:04.134 "method": "nvmf_set_config", 00:22:04.134 "params": { 00:22:04.134 "discovery_filter": "match_any", 00:22:04.134 "admin_cmd_passthru": { 00:22:04.134 "identify_ctrlr": false 00:22:04.134 }, 00:22:04.134 "dhchap_digests": [ 00:22:04.134 "sha256", 00:22:04.134 "sha384", 00:22:04.134 "sha512" 00:22:04.134 ], 00:22:04.134 "dhchap_dhgroups": [ 00:22:04.134 "null", 00:22:04.134 "ffdhe2048", 00:22:04.134 "ffdhe3072", 00:22:04.134 "ffdhe4096", 00:22:04.134 "ffdhe6144", 00:22:04.134 "ffdhe8192" 00:22:04.134 ] 00:22:04.134 } 00:22:04.134 }, 00:22:04.134 { 00:22:04.134 "method": "nvmf_set_max_subsystems", 00:22:04.134 "params": { 00:22:04.134 "max_subsystems": 1024 00:22:04.134 } 00:22:04.134 }, 00:22:04.134 { 00:22:04.134 "method": "nvmf_set_crdt", 00:22:04.134 "params": { 00:22:04.134 "crdt1": 0, 00:22:04.134 "crdt2": 0, 00:22:04.134 "crdt3": 0 00:22:04.134 } 00:22:04.134 }, 00:22:04.134 { 00:22:04.134 "method": "nvmf_create_transport", 00:22:04.134 "params": { 00:22:04.134 "trtype": "TCP", 00:22:04.134 "max_queue_depth": 128, 00:22:04.134 "max_io_qpairs_per_ctrlr": 127, 00:22:04.134 "in_capsule_data_size": 4096, 00:22:04.134 "max_io_size": 131072, 00:22:04.134 "io_unit_size": 131072, 00:22:04.134 "max_aq_depth": 128, 00:22:04.134 "num_shared_buffers": 511, 00:22:04.134 "buf_cache_size": 4294967295, 00:22:04.134 "dif_insert_or_strip": false, 00:22:04.134 "zcopy": false, 00:22:04.134 "c2h_success": false, 00:22:04.134 "sock_priority": 0, 00:22:04.134 "abort_timeout_sec": 1, 00:22:04.134 "ack_timeout": 0, 00:22:04.134 "data_wr_pool_size": 0 00:22:04.134 } 00:22:04.134 }, 00:22:04.134 { 00:22:04.134 "method": "nvmf_create_subsystem", 00:22:04.134 "params": { 00:22:04.134 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.134 "allow_any_host": false, 00:22:04.134 "serial_number": "00000000000000000000", 00:22:04.134 "model_number": "SPDK bdev Controller", 00:22:04.134 "max_namespaces": 32, 00:22:04.134 "min_cntlid": 1, 00:22:04.134 "max_cntlid": 65519, 00:22:04.134 "ana_reporting": false 00:22:04.134 } 00:22:04.134 }, 00:22:04.134 { 00:22:04.134 "method": "nvmf_subsystem_add_host", 00:22:04.134 "params": { 00:22:04.134 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.134 "host": "nqn.2016-06.io.spdk:host1", 00:22:04.134 "psk": "key0" 00:22:04.134 } 00:22:04.134 }, 00:22:04.134 { 00:22:04.134 "method": "nvmf_subsystem_add_ns", 00:22:04.134 "params": { 00:22:04.134 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.134 "namespace": { 00:22:04.134 "nsid": 1, 00:22:04.134 "bdev_name": "malloc0", 00:22:04.134 "nguid": "23D54889A7614E6097FFEA901BA0963D", 00:22:04.134 "uuid": "23d54889-a761-4e60-97ff-ea901ba0963d", 00:22:04.134 "no_auto_visible": false 00:22:04.134 } 00:22:04.134 } 00:22:04.134 }, 00:22:04.134 { 00:22:04.134 "method": "nvmf_subsystem_add_listener", 00:22:04.134 "params": { 00:22:04.134 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.134 "listen_address": { 00:22:04.134 "trtype": "TCP", 00:22:04.134 "adrfam": "IPv4", 00:22:04.134 "traddr": "10.0.0.2", 00:22:04.134 "trsvcid": "4420" 00:22:04.134 }, 00:22:04.134 
"secure_channel": false, 00:22:04.134 "sock_impl": "ssl" 00:22:04.134 } 00:22:04.134 } 00:22:04.134 ] 00:22:04.134 } 00:22:04.134 ] 00:22:04.134 }' 00:22:04.134 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:04.134 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3907086 00:22:04.134 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3907086 00:22:04.134 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:04.134 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3907086 ']' 00:22:04.134 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.134 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:04.134 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:04.134 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:04.134 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:04.134 [2024-11-27 09:54:19.575500] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:22:04.134 [2024-11-27 09:54:19.575558] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:04.395 [2024-11-27 09:54:19.665574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.395 [2024-11-27 09:54:19.695268] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:04.395 [2024-11-27 09:54:19.695297] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:04.395 [2024-11-27 09:54:19.695303] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:04.395 [2024-11-27 09:54:19.695307] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:04.395 [2024-11-27 09:54:19.695312] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:04.395 [2024-11-27 09:54:19.695801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.656 [2024-11-27 09:54:19.888578] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:04.656 [2024-11-27 09:54:19.920612] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:04.656 [2024-11-27 09:54:19.920822] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:04.918 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:04.918 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:04.918 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:04.918 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:04.918 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.179 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:05.179 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3907429 00:22:05.179 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3907429 /var/tmp/bdevperf.sock 00:22:05.179 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3907429 ']' 00:22:05.179 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:05.179 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:05.179 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:05.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
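On the initiator side, each bdevperf round does the same two things over its private RPC socket: load the same PSK into bdevperf's keyring, then attach with --psk. The earlier rounds issue these as explicit RPCs; this final round instead feeds the equivalent JSON config in via /dev/fd/63, as echoed below. A sketch of the explicit form, using the socket and key path from this run:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# same PSK as the target, loaded into bdevperf's own keyring
$RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qiYKqJAW4a
# attach over TLS, then drive the verify workload through the attached bdev
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

In the result tables, MiB/s is simply iops * io_size / 2^20; e.g. 5086.79 * 4096 / 1048576 = 19.87 MiB/s in the run below.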
00:22:05.179 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:05.179 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:05.179 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.179 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:22:05.179 "subsystems": [ 00:22:05.179 { 00:22:05.179 "subsystem": "keyring", 00:22:05.179 "config": [ 00:22:05.179 { 00:22:05.179 "method": "keyring_file_add_key", 00:22:05.179 "params": { 00:22:05.179 "name": "key0", 00:22:05.179 "path": "/tmp/tmp.qiYKqJAW4a" 00:22:05.179 } 00:22:05.179 } 00:22:05.179 ] 00:22:05.179 }, 00:22:05.179 { 00:22:05.179 "subsystem": "iobuf", 00:22:05.179 "config": [ 00:22:05.179 { 00:22:05.179 "method": "iobuf_set_options", 00:22:05.179 "params": { 00:22:05.179 "small_pool_count": 8192, 00:22:05.179 "large_pool_count": 1024, 00:22:05.179 "small_bufsize": 8192, 00:22:05.179 "large_bufsize": 135168, 00:22:05.179 "enable_numa": false 00:22:05.179 } 00:22:05.179 } 00:22:05.179 ] 00:22:05.179 }, 00:22:05.179 { 00:22:05.179 "subsystem": "sock", 00:22:05.179 "config": [ 00:22:05.179 { 00:22:05.179 "method": "sock_set_default_impl", 00:22:05.179 "params": { 00:22:05.180 "impl_name": "posix" 00:22:05.180 } 00:22:05.180 }, 00:22:05.180 { 00:22:05.180 "method": "sock_impl_set_options", 00:22:05.180 "params": { 00:22:05.180 "impl_name": "ssl", 00:22:05.180 "recv_buf_size": 4096, 00:22:05.180 "send_buf_size": 4096, 00:22:05.180 "enable_recv_pipe": true, 00:22:05.180 "enable_quickack": false, 00:22:05.180 "enable_placement_id": 0, 00:22:05.180 "enable_zerocopy_send_server": true, 00:22:05.180 "enable_zerocopy_send_client": false, 00:22:05.180 "zerocopy_threshold": 0, 00:22:05.180 "tls_version": 0, 00:22:05.180 "enable_ktls": false 00:22:05.180 } 00:22:05.180 }, 00:22:05.180 { 00:22:05.180 "method": "sock_impl_set_options", 00:22:05.180 "params": { 00:22:05.180 "impl_name": "posix", 00:22:05.180 "recv_buf_size": 2097152, 00:22:05.180 "send_buf_size": 2097152, 00:22:05.180 "enable_recv_pipe": true, 00:22:05.180 "enable_quickack": false, 00:22:05.180 "enable_placement_id": 0, 00:22:05.180 "enable_zerocopy_send_server": true, 00:22:05.180 "enable_zerocopy_send_client": false, 00:22:05.180 "zerocopy_threshold": 0, 00:22:05.180 "tls_version": 0, 00:22:05.180 "enable_ktls": false 00:22:05.180 } 00:22:05.180 } 00:22:05.180 ] 00:22:05.180 }, 00:22:05.180 { 00:22:05.180 "subsystem": "vmd", 00:22:05.180 "config": [] 00:22:05.180 }, 00:22:05.180 { 00:22:05.180 "subsystem": "accel", 00:22:05.180 "config": [ 00:22:05.180 { 00:22:05.180 "method": "accel_set_options", 00:22:05.180 "params": { 00:22:05.180 "small_cache_size": 128, 00:22:05.180 "large_cache_size": 16, 00:22:05.180 "task_count": 2048, 00:22:05.180 "sequence_count": 2048, 00:22:05.180 "buf_count": 2048 00:22:05.180 } 00:22:05.180 } 00:22:05.180 ] 00:22:05.180 }, 00:22:05.180 { 00:22:05.180 "subsystem": "bdev", 00:22:05.180 "config": [ 00:22:05.180 { 00:22:05.180 "method": "bdev_set_options", 00:22:05.180 "params": { 00:22:05.180 "bdev_io_pool_size": 65535, 00:22:05.180 "bdev_io_cache_size": 256, 00:22:05.180 "bdev_auto_examine": true, 00:22:05.180 "iobuf_small_cache_size": 128, 00:22:05.180 "iobuf_large_cache_size": 16 00:22:05.180 } 00:22:05.180 }, 00:22:05.180 { 00:22:05.180 "method": 
"bdev_raid_set_options", 00:22:05.180 "params": { 00:22:05.180 "process_window_size_kb": 1024, 00:22:05.180 "process_max_bandwidth_mb_sec": 0 00:22:05.180 } 00:22:05.180 }, 00:22:05.180 { 00:22:05.180 "method": "bdev_iscsi_set_options", 00:22:05.180 "params": { 00:22:05.180 "timeout_sec": 30 00:22:05.180 } 00:22:05.180 }, 00:22:05.180 { 00:22:05.180 "method": "bdev_nvme_set_options", 00:22:05.180 "params": { 00:22:05.180 "action_on_timeout": "none", 00:22:05.180 "timeout_us": 0, 00:22:05.180 "timeout_admin_us": 0, 00:22:05.180 "keep_alive_timeout_ms": 10000, 00:22:05.180 "arbitration_burst": 0, 00:22:05.180 "low_priority_weight": 0, 00:22:05.180 "medium_priority_weight": 0, 00:22:05.180 "high_priority_weight": 0, 00:22:05.180 "nvme_adminq_poll_period_us": 10000, 00:22:05.180 "nvme_ioq_poll_period_us": 0, 00:22:05.180 "io_queue_requests": 512, 00:22:05.180 "delay_cmd_submit": true, 00:22:05.180 "transport_retry_count": 4, 00:22:05.180 "bdev_retry_count": 3, 00:22:05.180 "transport_ack_timeout": 0, 00:22:05.180 "ctrlr_loss_timeout_sec": 0, 00:22:05.180 "reconnect_delay_sec": 0, 00:22:05.180 "fast_io_fail_timeout_sec": 0, 00:22:05.180 "disable_auto_failback": false, 00:22:05.180 "generate_uuids": false, 00:22:05.180 "transport_tos": 0, 00:22:05.180 "nvme_error_stat": false, 00:22:05.180 "rdma_srq_size": 0, 00:22:05.180 "io_path_stat": false, 00:22:05.180 "allow_accel_sequence": false, 00:22:05.180 "rdma_max_cq_size": 0, 00:22:05.180 "rdma_cm_event_timeout_ms": 0, 00:22:05.180 "dhchap_digests": [ 00:22:05.180 "sha256", 00:22:05.180 "sha384", 00:22:05.180 "sha512" 00:22:05.180 ], 00:22:05.180 "dhchap_dhgroups": [ 00:22:05.180 "null", 00:22:05.180 "ffdhe2048", 00:22:05.180 "ffdhe3072", 00:22:05.180 "ffdhe4096", 00:22:05.180 "ffdhe6144", 00:22:05.180 "ffdhe8192" 00:22:05.180 ] 00:22:05.180 } 00:22:05.180 }, 00:22:05.180 { 00:22:05.180 "method": "bdev_nvme_attach_controller", 00:22:05.180 "params": { 00:22:05.180 "name": "nvme0", 00:22:05.180 "trtype": "TCP", 00:22:05.180 "adrfam": "IPv4", 00:22:05.180 "traddr": "10.0.0.2", 00:22:05.180 "trsvcid": "4420", 00:22:05.180 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.180 "prchk_reftag": false, 00:22:05.180 "prchk_guard": false, 00:22:05.180 "ctrlr_loss_timeout_sec": 0, 00:22:05.180 "reconnect_delay_sec": 0, 00:22:05.180 "fast_io_fail_timeout_sec": 0, 00:22:05.180 "psk": "key0", 00:22:05.180 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:05.180 "hdgst": false, 00:22:05.180 "ddgst": false, 00:22:05.180 "multipath": "multipath" 00:22:05.180 } 00:22:05.180 }, 00:22:05.180 { 00:22:05.180 "method": "bdev_nvme_set_hotplug", 00:22:05.180 "params": { 00:22:05.180 "period_us": 100000, 00:22:05.180 "enable": false 00:22:05.180 } 00:22:05.180 }, 00:22:05.180 { 00:22:05.180 "method": "bdev_enable_histogram", 00:22:05.180 "params": { 00:22:05.180 "name": "nvme0n1", 00:22:05.180 "enable": true 00:22:05.180 } 00:22:05.180 }, 00:22:05.180 { 00:22:05.180 "method": "bdev_wait_for_examine" 00:22:05.180 } 00:22:05.180 ] 00:22:05.180 }, 00:22:05.180 { 00:22:05.180 "subsystem": "nbd", 00:22:05.180 "config": [] 00:22:05.180 } 00:22:05.180 ] 00:22:05.180 }' 00:22:05.180 [2024-11-27 09:54:20.448508] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
00:22:05.180 [2024-11-27 09:54:20.448562] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3907429 ] 00:22:05.180 [2024-11-27 09:54:20.532586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.180 [2024-11-27 09:54:20.562444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.442 [2024-11-27 09:54:20.697313] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:06.015 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:06.015 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:06.015 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:06.015 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:22:06.015 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.015 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:06.276 Running I/O for 1 seconds... 00:22:07.220 5206.00 IOPS, 20.34 MiB/s 00:22:07.220 Latency(us) 00:22:07.220 [2024-11-27T08:54:22.686Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:07.220 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:07.220 Verification LBA range: start 0x0 length 0x2000 00:22:07.220 nvme0n1 : 1.05 5086.79 19.87 0.00 0.00 24650.77 6171.31 46967.47 00:22:07.220 [2024-11-27T08:54:22.686Z] =================================================================================================================== 00:22:07.220 [2024-11-27T08:54:22.686Z] Total : 5086.79 19.87 0.00 0.00 24650.77 6171.31 46967.47 00:22:07.220 { 00:22:07.220 "results": [ 00:22:07.220 { 00:22:07.220 "job": "nvme0n1", 00:22:07.220 "core_mask": "0x2", 00:22:07.220 "workload": "verify", 00:22:07.220 "status": "finished", 00:22:07.220 "verify_range": { 00:22:07.220 "start": 0, 00:22:07.220 "length": 8192 00:22:07.220 }, 00:22:07.220 "queue_depth": 128, 00:22:07.220 "io_size": 4096, 00:22:07.220 "runtime": 1.048599, 00:22:07.220 "iops": 5086.787227529303, 00:22:07.220 "mibps": 19.87026260753634, 00:22:07.220 "io_failed": 0, 00:22:07.220 "io_timeout": 0, 00:22:07.220 "avg_latency_us": 24650.768253968254, 00:22:07.220 "min_latency_us": 6171.306666666666, 00:22:07.220 "max_latency_us": 46967.46666666667 00:22:07.220 } 00:22:07.220 ], 00:22:07.220 "core_count": 1 00:22:07.220 } 00:22:07.220 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:22:07.220 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:22:07.220 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:07.220 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:22:07.220 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:22:07.220 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = 
--pid ']' 00:22:07.220 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:07.220 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:22:07.220 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:22:07.220 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:22:07.220 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:07.220 nvmf_trace.0 00:22:07.220 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:22:07.220 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3907429 00:22:07.220 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3907429 ']' 00:22:07.220 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3907429 00:22:07.220 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:07.480 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:07.480 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3907429 00:22:07.480 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:07.481 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:07.481 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3907429' 00:22:07.481 killing process with pid 3907429 00:22:07.481 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3907429 00:22:07.481 Received shutdown signal, test time was about 1.000000 seconds 00:22:07.481 00:22:07.481 Latency(us) 00:22:07.481 [2024-11-27T08:54:22.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:07.481 [2024-11-27T08:54:22.947Z] =================================================================================================================== 00:22:07.481 [2024-11-27T08:54:22.947Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:07.481 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3907429 00:22:07.481 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:07.481 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:07.481 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:22:07.481 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:07.481 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:22:07.481 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:07.481 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:07.481 rmmod nvme_tcp 00:22:07.481 rmmod nvme_fabrics 00:22:07.481 rmmod nvme_keyring 00:22:07.481 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:07.481 09:54:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:22:07.481 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:22:07.481 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3907086 ']' 00:22:07.481 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3907086 00:22:07.481 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3907086 ']' 00:22:07.481 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3907086 00:22:07.481 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:07.481 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:07.481 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3907086 00:22:07.742 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:07.742 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:07.742 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3907086' 00:22:07.742 killing process with pid 3907086 00:22:07.742 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3907086 00:22:07.742 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3907086 00:22:07.742 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:07.742 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:07.742 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:07.742 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:22:07.742 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:22:07.742 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:07.742 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:22:07.742 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:07.742 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:07.742 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.742 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:07.742 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.TYXjzlweq8 /tmp/tmp.pSqqic9vX0 /tmp/tmp.qiYKqJAW4a 00:22:10.288 00:22:10.288 real 1m28.007s 00:22:10.288 user 2m18.059s 00:22:10.288 sys 0m27.272s 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:10.288 ************************************ 00:22:10.288 END TEST nvmf_tls 
00:22:10.288 ************************************ 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:10.288 ************************************ 00:22:10.288 START TEST nvmf_fips 00:22:10.288 ************************************ 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:10.288 * Looking for test storage... 00:22:10.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:10.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.288 --rc genhtml_branch_coverage=1 00:22:10.288 --rc genhtml_function_coverage=1 00:22:10.288 --rc genhtml_legend=1 00:22:10.288 --rc geninfo_all_blocks=1 00:22:10.288 --rc geninfo_unexecuted_blocks=1 00:22:10.288 00:22:10.288 ' 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:10.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.288 --rc genhtml_branch_coverage=1 00:22:10.288 --rc genhtml_function_coverage=1 00:22:10.288 --rc genhtml_legend=1 00:22:10.288 --rc geninfo_all_blocks=1 00:22:10.288 --rc geninfo_unexecuted_blocks=1 00:22:10.288 00:22:10.288 ' 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:10.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.288 --rc genhtml_branch_coverage=1 00:22:10.288 --rc genhtml_function_coverage=1 00:22:10.288 --rc genhtml_legend=1 00:22:10.288 --rc geninfo_all_blocks=1 00:22:10.288 --rc geninfo_unexecuted_blocks=1 00:22:10.288 00:22:10.288 ' 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:10.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.288 --rc genhtml_branch_coverage=1 00:22:10.288 --rc genhtml_function_coverage=1 00:22:10.288 --rc genhtml_legend=1 00:22:10.288 --rc geninfo_all_blocks=1 00:22:10.288 --rc geninfo_unexecuted_blocks=1 00:22:10.288 00:22:10.288 ' 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.288 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:10.289 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:22:10.289 09:54:25 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:22:10.289 Error setting digest 00:22:10.289 40B2027CF67F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:22:10.289 40B2027CF67F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:10.289 
09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:22:10.289 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:18.431 09:54:32 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:18.431 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:18.431 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:18.431 09:54:32 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:18.431 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:18.431 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:18.431 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.432 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:18.432 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:22:18.432 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:18.432 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:18.432 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:18.432 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:18.432 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:18.432 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:18.432 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:18.432 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:18.432 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:18.432 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:18.432 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:18.432 09:54:32 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:18.432 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:18.432 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:18.432 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:18.432 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:18.432 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:18.432 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:18.432 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:18.432 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:18.432 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:18.432 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:18.432 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:18.432 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:18.432 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:18.432 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:18.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:18.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.571 ms 00:22:18.432 00:22:18.432 --- 10.0.0.2 ping statistics --- 00:22:18.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.432 rtt min/avg/max/mdev = 0.571/0.571/0.571/0.000 ms 00:22:18.432 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:18.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:18.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:22:18.432 00:22:18.432 --- 10.0.0.1 ping statistics --- 00:22:18.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.432 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:22:18.432 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:18.432 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:22:18.432 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:18.432 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:18.432 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:18.432 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:18.432 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:18.432 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:18.432 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:18.432 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:22:18.432 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:18.432 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:18.432 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:18.432 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3912155 00:22:18.432 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3912155 00:22:18.432 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:18.432 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3912155 ']' 00:22:18.432 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.432 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:18.432 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:18.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.432 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:18.432 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:18.432 [2024-11-27 09:54:33.260166] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
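The nvmftestinit block traced above wires the two e810 ports into a point-to-point test rig: the target-side port is moved into a fresh network namespace, each side gets a 10.0.0.x/24 address, an iptables ACCEPT rule opens TCP port 4420, and a ping in each direction proves connectivity before nvmf_tgt starts. A condensed, hedged sketch of the same plumbing (device names, addresses, and port are copied from the trace; the address-flush steps and the SPDK_NVMF comment tag on the iptables rule are omitted):

    # Sketch: carve the target NIC into its own namespace and verify reachability.
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                         # root ns -> target ns
    ip netns exec "$NS" ping -c 1 10.0.0.1     # target ns -> root ns

nvmf_tgt is then launched inside the namespace with ip netns exec, which is why every later target-side command in the trace carries that prefix.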
00:22:18.432 [2024-11-27 09:54:33.260238] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:18.432 [2024-11-27 09:54:33.359104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.432 [2024-11-27 09:54:33.408465] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:18.432 [2024-11-27 09:54:33.408514] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:18.432 [2024-11-27 09:54:33.408523] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:18.432 [2024-11-27 09:54:33.408531] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:18.432 [2024-11-27 09:54:33.408537] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:18.432 [2024-11-27 09:54:33.409300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:18.693 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:18.693 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:22:18.693 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:18.693 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:18.693 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:18.693 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.693 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:22:18.693 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:18.693 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:22:18.693 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.tih 00:22:18.693 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:18.693 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.tih 00:22:18.693 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.tih 00:22:18.693 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.tih 00:22:18.693 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:18.954 [2024-11-27 09:54:34.272845] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.954 [2024-11-27 09:54:34.288838] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:18.954 [2024-11-27 09:54:34.289156] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:18.954 malloc0 00:22:18.954 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:18.954 09:54:34 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3912381 00:22:18.954 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3912381 /var/tmp/bdevperf.sock 00:22:18.954 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:18.954 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3912381 ']' 00:22:18.954 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:18.954 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:18.954 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:18.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:18.954 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:18.954 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:19.214 [2024-11-27 09:54:34.431703] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:22:19.214 [2024-11-27 09:54:34.431777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3912381 ] 00:22:19.214 [2024-11-27 09:54:34.522092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.214 [2024-11-27 09:54:34.573191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:19.786 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:19.786 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:22:19.786 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.tih 00:22:20.047 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:20.307 [2024-11-27 09:54:35.586909] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:20.307 TLSTESTn1 00:22:20.307 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:20.568 Running I/O for 10 seconds... 
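The run announced here rests on three client-side steps performed just above: write the interchange-format PSK to a file, register it with the bdevperf keyring, and attach the controller with --psk. As a hedged recap in standalone commands, using only strings that appear verbatim in the trace (the key is the test suite's published sample PSK, not a secret):

    # Sketch: the TLS key plumbing behind the TLSTESTn1 bdev used below.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' \
        > /tmp/spdk-psk.tih
    chmod 0600 /tmp/spdk-psk.tih               # the test tightens perms before registering
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.tih
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0

The TLSTESTn1 name in the results that follow is the bdev this attach creates: controller name TLSTEST plus namespace 1.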
00:22:22.450 3610.00 IOPS, 14.10 MiB/s [2024-11-27T08:54:38.856Z] 4418.50 IOPS, 17.26 MiB/s [2024-11-27T08:54:40.238Z] 5018.67 IOPS, 19.60 MiB/s [2024-11-27T08:54:40.809Z] 5338.75 IOPS, 20.85 MiB/s [2024-11-27T08:54:42.194Z] 5409.20 IOPS, 21.13 MiB/s [2024-11-27T08:54:43.196Z] 5573.67 IOPS, 21.77 MiB/s [2024-11-27T08:54:44.138Z] 5569.29 IOPS, 21.76 MiB/s [2024-11-27T08:54:45.078Z] 5689.50 IOPS, 22.22 MiB/s [2024-11-27T08:54:46.019Z] 5782.33 IOPS, 22.59 MiB/s [2024-11-27T08:54:46.019Z] 5863.50 IOPS, 22.90 MiB/s 00:22:30.553 Latency(us) 00:22:30.553 [2024-11-27T08:54:46.019Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:30.553 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:30.553 Verification LBA range: start 0x0 length 0x2000 00:22:30.553 TLSTESTn1 : 10.03 5861.21 22.90 0.00 0.00 21790.90 6471.68 30146.56 00:22:30.553 [2024-11-27T08:54:46.019Z] =================================================================================================================== 00:22:30.553 [2024-11-27T08:54:46.019Z] Total : 5861.21 22.90 0.00 0.00 21790.90 6471.68 30146.56 00:22:30.553 { 00:22:30.553 "results": [ 00:22:30.553 { 00:22:30.553 "job": "TLSTESTn1", 00:22:30.553 "core_mask": "0x4", 00:22:30.553 "workload": "verify", 00:22:30.553 "status": "finished", 00:22:30.553 "verify_range": { 00:22:30.553 "start": 0, 00:22:30.553 "length": 8192 00:22:30.553 }, 00:22:30.553 "queue_depth": 128, 00:22:30.553 "io_size": 4096, 00:22:30.553 "runtime": 10.025411, 00:22:30.553 "iops": 5861.206089206717, 00:22:30.553 "mibps": 22.895336285963737, 00:22:30.553 "io_failed": 0, 00:22:30.553 "io_timeout": 0, 00:22:30.553 "avg_latency_us": 21790.902165268348, 00:22:30.553 "min_latency_us": 6471.68, 00:22:30.553 "max_latency_us": 30146.56 00:22:30.553 } 00:22:30.553 ], 00:22:30.553 "core_count": 1 00:22:30.553 } 00:22:30.553 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:30.553 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:30.553 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:22:30.553 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:22:30.553 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:22:30.553 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:30.553 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:22:30.553 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:22:30.553 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:22:30.553 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:30.553 nvmf_trace.0 00:22:30.553 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:22:30.553 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3912381 00:22:30.553 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3912381 ']' 00:22:30.553 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # 
kill -0 3912381 00:22:30.553 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:22:30.553 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:30.553 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3912381 00:22:30.815 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:30.815 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:30.815 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3912381' 00:22:30.815 killing process with pid 3912381 00:22:30.815 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3912381 00:22:30.815 Received shutdown signal, test time was about 10.000000 seconds 00:22:30.815 00:22:30.815 Latency(us) 00:22:30.815 [2024-11-27T08:54:46.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:30.815 [2024-11-27T08:54:46.281Z] =================================================================================================================== 00:22:30.815 [2024-11-27T08:54:46.281Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:30.815 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3912381 00:22:30.815 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:30.815 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:30.815 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:22:30.815 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:30.815 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:22:30.815 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:30.815 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:30.815 rmmod nvme_tcp 00:22:30.815 rmmod nvme_fabrics 00:22:30.815 rmmod nvme_keyring 00:22:30.815 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:30.815 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:22:30.815 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:22:30.815 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3912155 ']' 00:22:30.815 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3912155 00:22:30.815 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3912155 ']' 00:22:30.815 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3912155 00:22:30.815 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:22:30.815 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:30.815 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3912155 00:22:30.815 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:30.815 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:30.815 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3912155' 00:22:30.815 killing process with pid 3912155 00:22:30.815 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3912155 00:22:30.815 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3912155 00:22:31.076 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:31.076 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:31.076 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:31.076 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:22:31.076 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:22:31.076 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:22:31.076 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:31.076 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:31.076 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:31.076 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.076 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:31.076 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.622 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:33.622 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.tih 00:22:33.622 00:22:33.622 real 0m23.217s 00:22:33.622 user 0m25.064s 00:22:33.622 sys 0m9.501s 00:22:33.622 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:33.623 ************************************ 00:22:33.623 END TEST nvmf_fips 00:22:33.623 ************************************ 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:33.623 ************************************ 00:22:33.623 START TEST nvmf_control_msg_list 00:22:33.623 ************************************ 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:33.623 * Looking for test storage... 
00:22:33.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:33.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.623 --rc genhtml_branch_coverage=1 00:22:33.623 --rc genhtml_function_coverage=1 00:22:33.623 --rc genhtml_legend=1 00:22:33.623 --rc geninfo_all_blocks=1 00:22:33.623 --rc geninfo_unexecuted_blocks=1 00:22:33.623 00:22:33.623 ' 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:33.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.623 --rc genhtml_branch_coverage=1 00:22:33.623 --rc genhtml_function_coverage=1 00:22:33.623 --rc genhtml_legend=1 00:22:33.623 --rc geninfo_all_blocks=1 00:22:33.623 --rc geninfo_unexecuted_blocks=1 00:22:33.623 00:22:33.623 ' 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:33.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.623 --rc genhtml_branch_coverage=1 00:22:33.623 --rc genhtml_function_coverage=1 00:22:33.623 --rc genhtml_legend=1 00:22:33.623 --rc geninfo_all_blocks=1 00:22:33.623 --rc geninfo_unexecuted_blocks=1 00:22:33.623 00:22:33.623 ' 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:33.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.623 --rc genhtml_branch_coverage=1 00:22:33.623 --rc genhtml_function_coverage=1 00:22:33.623 --rc genhtml_legend=1 00:22:33.623 --rc geninfo_all_blocks=1 00:22:33.623 --rc geninfo_unexecuted_blocks=1 00:22:33.623 00:22:33.623 ' 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.623 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:22:33.624 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.624 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:22:33.624 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:33.624 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:33.624 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:33.624 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:33.624 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:33.624 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:33.624 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:33.624 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:33.624 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:33.624 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:33.624 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:22:33.624 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:33.624 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:33.624 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:33.624 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:33.624 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:33.624 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.624 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:33.624 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.624 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:33.624 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:33.624 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:22:33.624 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:22:41.768 09:54:55 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:41.768 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.768 09:54:55 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:41.768 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:41.768 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:41.768 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:41.768 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:41.768 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:41.768 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:41.768 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:41.768 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:41.768 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:41.768 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:41.768 09:54:56 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:41.768 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:41.768 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:41.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:41.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:22:41.768 00:22:41.769 --- 10.0.0.2 ping statistics --- 00:22:41.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.769 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:22:41.769 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:41.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:41.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:22:41.769 00:22:41.769 --- 10.0.0.1 ping statistics --- 00:22:41.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.769 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:22:41.769 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:41.769 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:22:41.769 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:41.769 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:41.769 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:41.769 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:41.769 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:41.769 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:41.769 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:41.769 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:22:41.769 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:41.769 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:41.769 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:41.769 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3918867 00:22:41.769 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3918867 00:22:41.769 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:41.769 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 3918867 ']' 00:22:41.769 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.769 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:41.769 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:41.769 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:41.769 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:41.769 [2024-11-27 09:54:56.394927] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:22:41.769 [2024-11-27 09:54:56.394995] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:41.769 [2024-11-27 09:54:56.492335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.769 [2024-11-27 09:54:56.542859] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:41.769 [2024-11-27 09:54:56.542910] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:41.769 [2024-11-27 09:54:56.542918] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:41.769 [2024-11-27 09:54:56.542926] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:41.769 [2024-11-27 09:54:56.542932] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
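The nvmf_tgt coming up here runs inside a network namespace, with one E810 port (cvl_0_0) moved to the target side and its peer (cvl_0_1) left on the host as the initiator. Condensed to its effective commands on this rig, nvmftestinit's plumbing amounts to the sketch below (the iptables rule also carries an SPDK_NVMF comment tag in the real script, omitted here for brevity):

# move one port into a private namespace for the target, keep the peer as initiator
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# open the NVMe/TCP port and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# every target-side command, including nvmf_tgt itself, is then wrapped as:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF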
00:22:41.769 [2024-11-27 09:54:56.543716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.769 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:41.769 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:22:41.769 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:41.769 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:41.769 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:42.031 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:42.031 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:42.031 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:42.031 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:22:42.031 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.031 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:42.031 [2024-11-27 09:54:57.261481] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:42.031 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.031 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:22:42.031 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.031 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:42.031 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.031 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:42.031 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.031 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:42.031 Malloc0 00:22:42.031 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.031 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:42.031 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.031 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:42.031 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.031 09:54:57 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:42.031 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.031 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:42.031 [2024-11-27 09:54:57.315772] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:42.031 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.031 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3919053 00:22:42.031 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:42.031 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3919055 00:22:42.031 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:42.031 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3919056 00:22:42.031 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3919053 00:22:42.031 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:42.031 [2024-11-27 09:54:57.406336] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:42.031 [2024-11-27 09:54:57.416583] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:42.031 [2024-11-27 09:54:57.416988] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:43.417 Initializing NVMe Controllers 00:22:43.417 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:43.417 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:22:43.417 Initialization complete. Launching workers. 
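The latency tables that follow come from the fixture the RPCs above created. As a condensed sketch in equivalent rpc.py form (control_msg_list.sh issues the same calls via rpc_cmd):

# cap the transport at a single control message buffer - the point of this test
./scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1

# one subsystem backed by a 32 MiB, 512 B-block malloc bdev, listening on 4420
./scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# three concurrent single-queue initiators (lcores 1-3 in the tables below)
# contend for that one control message slot
for mask in 0x2 0x4 0x8; do
  ./build/bin/spdk_nvme_perf -c $mask -q 1 -o 4096 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
done
wait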
00:22:43.417 ======================================================== 00:22:43.417 Latency(us) 00:22:43.417 Device Information : IOPS MiB/s Average min max 00:22:43.417 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1519.00 5.93 658.07 282.61 817.51 00:22:43.417 ======================================================== 00:22:43.417 Total : 1519.00 5.93 658.07 282.61 817.51 00:22:43.417 00:22:43.417 [2024-11-27 09:54:58.480093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10cee00 is same with the state(6) to be set 00:22:43.417 Initializing NVMe Controllers 00:22:43.417 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:43.417 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:22:43.417 Initialization complete. Launching workers. 00:22:43.417 ======================================================== 00:22:43.417 Latency(us) 00:22:43.417 Device Information : IOPS MiB/s Average min max 00:22:43.417 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40929.90 40688.23 41792.11 00:22:43.417 ======================================================== 00:22:43.417 Total : 25.00 0.10 40929.90 40688.23 41792.11 00:22:43.417 00:22:43.417 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3919055 00:22:43.417 Initializing NVMe Controllers 00:22:43.417 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:43.417 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:22:43.417 Initialization complete. Launching workers. 00:22:43.417 ======================================================== 00:22:43.417 Latency(us) 00:22:43.417 Device Information : IOPS MiB/s Average min max 00:22:43.417 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40925.07 40809.70 41369.64 00:22:43.417 ======================================================== 00:22:43.417 Total : 25.00 0.10 40925.07 40809.70 41369.64 00:22:43.417 00:22:43.417 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3919056 00:22:43.417 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:43.417 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:22:43.417 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:43.417 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:22:43.417 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:43.417 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:22:43.417 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:43.417 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:43.417 rmmod nvme_tcp 00:22:43.417 rmmod nvme_fabrics 00:22:43.417 rmmod nvme_keyring 00:22:43.417 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:43.417 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:22:43.417 09:54:58 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:22:43.417 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 3918867 ']' 00:22:43.417 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3918867 00:22:43.417 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 3918867 ']' 00:22:43.417 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 3918867 00:22:43.417 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:22:43.417 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:43.417 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3918867 00:22:43.417 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:43.417 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:43.417 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3918867' 00:22:43.417 killing process with pid 3918867 00:22:43.417 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 3918867 00:22:43.417 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 3918867 00:22:43.678 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:43.678 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:43.678 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:43.678 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:22:43.678 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:22:43.678 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:43.678 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:22:43.678 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:43.678 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:43.678 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.678 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:43.678 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.591 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:45.591 00:22:45.591 real 0m12.482s 00:22:45.591 user 0m7.964s 00:22:45.591 sys 0m6.633s 00:22:45.591 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:45.591 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@10 -- # set +x 00:22:45.591 ************************************ 00:22:45.591 END TEST nvmf_control_msg_list 00:22:45.591 ************************************ 00:22:45.852 09:55:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:45.852 09:55:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:45.852 09:55:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:45.852 09:55:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:45.852 ************************************ 00:22:45.852 START TEST nvmf_wait_for_buf 00:22:45.852 ************************************ 00:22:45.852 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:45.852 * Looking for test storage... 00:22:45.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:45.852 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:45.852 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:22:45.852 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:45.852 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:45.852 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:45.852 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:45.852 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:45.852 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:22:45.852 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:22:45.852 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:22:45.852 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:22:45.852 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:22:45.852 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:22:45.852 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:22:45.852 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:45.852 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:22:45.852 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:22:45.852 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:45.852 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:45.852 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:22:45.852 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:22:45.852 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:45.852 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:22:45.852 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:45.852 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:46.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.114 --rc genhtml_branch_coverage=1 00:22:46.114 --rc genhtml_function_coverage=1 00:22:46.114 --rc genhtml_legend=1 00:22:46.114 --rc geninfo_all_blocks=1 00:22:46.114 --rc geninfo_unexecuted_blocks=1 00:22:46.114 00:22:46.114 ' 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:46.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.114 --rc genhtml_branch_coverage=1 00:22:46.114 --rc genhtml_function_coverage=1 00:22:46.114 --rc genhtml_legend=1 00:22:46.114 --rc geninfo_all_blocks=1 00:22:46.114 --rc geninfo_unexecuted_blocks=1 00:22:46.114 00:22:46.114 ' 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:46.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.114 --rc genhtml_branch_coverage=1 00:22:46.114 --rc genhtml_function_coverage=1 00:22:46.114 --rc genhtml_legend=1 00:22:46.114 --rc geninfo_all_blocks=1 00:22:46.114 --rc geninfo_unexecuted_blocks=1 00:22:46.114 00:22:46.114 ' 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:46.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.114 --rc genhtml_branch_coverage=1 00:22:46.114 --rc genhtml_function_coverage=1 00:22:46.114 --rc genhtml_legend=1 00:22:46.114 --rc geninfo_all_blocks=1 00:22:46.114 --rc geninfo_unexecuted_blocks=1 00:22:46.114 00:22:46.114 ' 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:46.114 09:55:01 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:46.114 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:46.115 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:46.115 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:46.115 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:46.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:46.115 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:46.115 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:46.115 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:46.115 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:22:46.115 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:22:46.115 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:46.115 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:46.115 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:46.115 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:46.115 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.115 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:46.115 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.115 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:46.115 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:46.115 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:46.115 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:54.258 
09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:54.258 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:54.258 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:54.258 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:54.258 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:54.258 09:55:08 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:54.258 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:54.259 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:54.259 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:54.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:54.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:22:54.259 00:22:54.259 --- 10.0.0.2 ping statistics --- 00:22:54.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.259 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:22:54.259 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:54.259 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:54.259 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:22:54.259 00:22:54.259 --- 10.0.0.1 ping statistics --- 00:22:54.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.259 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:22:54.259 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:54.259 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:22:54.259 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:54.259 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:54.259 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:54.259 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:54.259 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:54.259 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:54.259 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:54.259 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:22:54.259 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:54.259 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:54.259 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:54.259 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3923550 00:22:54.259 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3923550 00:22:54.259 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:54.259 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 3923550 ']' 00:22:54.259 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.259 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:54.259 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.259 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:54.259 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:54.259 [2024-11-27 09:55:08.971255] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
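# --- editor's note: a reconstruction of the nvmf_tcp_init sequence traced above,
# --- assembled from the xtrace output. The interface names cvl_0_0/cvl_0_1, the
# --- namespace cvl_0_0_ns_spdk and the 10.0.0.0/24 addresses are all taken from
# --- this log; this is a sketch of the test topology, not a verbatim copy of
# --- test/nvmf/common.sh.
ip netns add cvl_0_0_ns_spdk                        # target gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side e810 port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP stays on the host side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                  # host -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> host reachability check
# --- the successful single-packet pings above and below confirm the namespace split
# --- before nvmf_tgt is started under "ip netns exec cvl_0_0_ns_spdk".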
00:22:54.259 [2024-11-27 09:55:08.971322] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.259 [2024-11-27 09:55:09.069277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.259 [2024-11-27 09:55:09.119170] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.259 [2024-11-27 09:55:09.119220] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:54.259 [2024-11-27 09:55:09.119228] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:54.259 [2024-11-27 09:55:09.119235] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:54.259 [2024-11-27 09:55:09.119241] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:54.259 [2024-11-27 09:55:09.119989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.523 09:55:09 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:54.523 Malloc0 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:54.523 [2024-11-27 09:55:09.941467] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:54.523 [2024-11-27 09:55:09.977796] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.523 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:54.784 [2024-11-27 09:55:10.083315] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:56.168 Initializing NVMe Controllers 00:22:56.168 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:56.168 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:22:56.168 Initialization complete. Launching workers. 00:22:56.168 ======================================================== 00:22:56.168 Latency(us) 00:22:56.168 Device Information : IOPS MiB/s Average min max 00:22:56.168 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32294.78 8009.97 64851.73 00:22:56.168 ======================================================== 00:22:56.168 Total : 129.00 16.12 32294.78 8009.97 64851.73 00:22:56.168 00:22:56.168 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:22:56.168 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:22:56.168 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.168 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:56.168 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.168 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:22:56.168 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:22:56.168 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:56.168 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:22:56.168 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:56.168 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:22:56.168 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:56.168 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:22:56.168 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:56.169 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:56.169 rmmod nvme_tcp 00:22:56.169 rmmod nvme_fabrics 00:22:56.169 rmmod nvme_keyring 00:22:56.169 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:56.169 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:22:56.169 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:22:56.169 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3923550 ']' 00:22:56.169 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3923550 00:22:56.169 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 3923550 ']' 00:22:56.169 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 3923550 00:22:56.169 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:22:56.169 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:56.169 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3923550 00:22:56.430 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:56.430 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:56.430 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3923550' 00:22:56.430 killing process with pid 3923550 00:22:56.430 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 3923550 00:22:56.430 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 3923550 00:22:56.430 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:56.430 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:56.430 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:56.430 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:22:56.430 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:22:56.430 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:56.430 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:22:56.430 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:56.430 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:56.430 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.430 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:56.430 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.980 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:58.980 00:22:58.980 real 0m12.774s 00:22:58.980 user 0m5.070s 00:22:58.980 sys 0m6.292s 00:22:58.980 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:58.980 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:58.980 ************************************ 00:22:58.980 END TEST nvmf_wait_for_buf 00:22:58.980 ************************************ 00:22:58.980 09:55:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:22:58.980 09:55:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:22:58.980 09:55:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:22:58.980 09:55:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:22:58.980 09:55:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:22:58.980 09:55:13 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:07.125 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:07.125 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:07.125 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:07.125 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:07.125 ************************************ 00:23:07.125 START TEST nvmf_perf_adq 00:23:07.125 ************************************ 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:07.125 * Looking for test storage... 00:23:07.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:07.125 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:07.126 09:55:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:07.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.126 --rc genhtml_branch_coverage=1 00:23:07.126 --rc genhtml_function_coverage=1 00:23:07.126 --rc genhtml_legend=1 00:23:07.126 --rc geninfo_all_blocks=1 00:23:07.126 --rc geninfo_unexecuted_blocks=1 00:23:07.126 00:23:07.126 ' 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:07.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.126 --rc genhtml_branch_coverage=1 00:23:07.126 --rc genhtml_function_coverage=1 00:23:07.126 --rc genhtml_legend=1 00:23:07.126 --rc geninfo_all_blocks=1 00:23:07.126 --rc geninfo_unexecuted_blocks=1 00:23:07.126 00:23:07.126 ' 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:07.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.126 --rc genhtml_branch_coverage=1 00:23:07.126 --rc genhtml_function_coverage=1 00:23:07.126 --rc genhtml_legend=1 00:23:07.126 --rc geninfo_all_blocks=1 00:23:07.126 --rc geninfo_unexecuted_blocks=1 00:23:07.126 00:23:07.126 ' 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:07.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.126 --rc genhtml_branch_coverage=1 00:23:07.126 --rc genhtml_function_coverage=1 00:23:07.126 --rc genhtml_legend=1 00:23:07.126 --rc geninfo_all_blocks=1 00:23:07.126 --rc geninfo_unexecuted_blocks=1 00:23:07.126 00:23:07.126 ' 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
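# --- editor's note: the "lt 1.15 2" trace above is scripts/common.sh deciding that
# --- the installed lcov predates 2.x, so the legacy coverage option names
# --- (--rc lcov_branch_coverage=1 ...) are exported. A minimal standalone sketch of
# --- the same component-wise compare follows; ver_lt is a hypothetical name for
# --- illustration, not the upstream cmp_versions API.
ver_lt() {
    local IFS=.-:                       # split version strings on '.', '-' and ':'
    read -ra a <<< "$1"; read -ra b <<< "$2"
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} )) v
    for (( v = 0; v < n; v++ )); do
        (( ${a[v]:-0} > ${b[v]:-0} )) && return 1   # first differing field decides
        (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
    done
    return 1                            # equal versions are not less-than
}
ver_lt 1.15 2 && echo "lcov < 2: use legacy --rc option names"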
00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:07.126 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:23:07.126 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:07.126 09:55:21 
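
The "[: : integer expression expected" complaint just above is benign: build_nvmf_app_args tests an optional flag variable with -eq while the variable is empty, so test sees '' -eq 1, prints the error, and simply evaluates false, and the run continues. A defensive sketch of that guard pattern (SPDK_TEST_EXAMPLE_FLAG and --example-arg are hypothetical names for illustration, not the in-tree fix):

    # default the flag to 0 so [ never numerically compares an empty string
    if [[ ${SPDK_TEST_EXAMPLE_FLAG:-0} -eq 1 ]]; then
      NVMF_APP+=(--example-arg)   # hypothetical argument, for illustration only
    fi
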
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:13.834 09:55:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:13.834 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.834 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:13.835 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:13.835 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:13.835 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:13.835 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:13.835 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:13.835 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.835 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.835 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:13.835 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:13.835 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:13.835 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:13.835 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:13.835 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.835 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:13.835 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.835 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:13.835 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:13.835 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.835 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:13.835 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:13.835 09:55:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.835 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:13.835 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.835 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:13.835 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.835 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:13.835 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:13.835 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.835 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:13.835 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:13.835 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.835 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:13.835 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:13.835 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:23:13.835 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:13.835 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:23:13.835 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:23:13.835 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:23:14.850 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:23:17.394 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
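
At this point both E810 ports (Intel device 0x159b) have been matched against the pci_bus_cache arrays and their kernel interfaces, cvl_0_0 and cvl_0_1, collected into net_devs; the script then reloads sch_mqprio and the ice driver before the first test pass. A hedged sketch of the sysfs walk that discovery loop performs, assuming lspci and the standard /sys/bus/pci layout (not the verbatim nvmf/common.sh code):

    # list every net interface that sits on an Intel E810 PCI function
    for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do   # e.g. 0000:4b:00.0
      for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $netdir ]] || continue                  # port bound to a non-net driver
        echo "Found net devices under $pci: ${netdir##*/}"   # e.g. cvl_0_0
      done
    done
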
gather_supported_nvmf_pci_devs 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:22.690 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:22.690 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.690 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:22.691 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:22.691 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:23:22.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:22.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms
00:23:22.691
00:23:22.691 --- 10.0.0.2 ping statistics ---
00:23:22.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:22.691 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms
00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:22.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:22.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms
00:23:22.691
00:23:22.691 --- 10.0.0.1 ping statistics ---
00:23:22.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:22.691 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms
00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0
00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc
00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3933789
00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3933789
00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3933789 ']'
00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:22.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:22.691 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:23:22.691 [2024-11-27 09:55:37.703265] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization...
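
Condensed from the nvmf_tcp_init sequence traced above, this is the whole point-to-point topology the test runs on: the target port cvl_0_0 lives in its own namespace with 10.0.0.2/24, the initiator keeps cvl_0_1 with 10.0.0.1/24 on the host, one iptables rule admits NVMe/TCP on port 4420, and a ping in each direction proves the link (commands taken directly from the trace):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # host -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host

nvmf_tgt is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc), so its listener binds the namespaced port while spdk_nvme_perf connects from the host side.
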
00:23:22.691 [2024-11-27 09:55:37.703330] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:22.691 [2024-11-27 09:55:37.801640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:22.691 [2024-11-27 09:55:37.856861] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:22.691 [2024-11-27 09:55:37.856915] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:22.691 [2024-11-27 09:55:37.856925] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:22.691 [2024-11-27 09:55:37.856932] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:22.691 [2024-11-27 09:55:37.856939] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:22.691 [2024-11-27 09:55:37.859291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:22.691 [2024-11-27 09:55:37.859451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:22.691 [2024-11-27 09:55:37.859612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.691 [2024-11-27 09:55:37.859612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:23.264 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:23.264 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:23:23.264 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:23.264 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:23.264 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:23.264 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:23.264 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:23:23.264 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:23.264 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:23.264 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.264 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:23.264 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.264 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:23.264 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:23:23.264 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.264 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:23.264 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.264 
09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:23.264 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.264 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:23.264 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.264 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:23:23.264 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.264 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:23.264 [2024-11-27 09:55:38.715578] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:23.264 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.264 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:23.264 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.264 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:23.526 Malloc1 00:23:23.526 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.526 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:23.526 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.526 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:23.526 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.526 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:23.526 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.526 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:23.526 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.526 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:23.526 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.526 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:23.526 [2024-11-27 09:55:38.794442] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:23.526 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.526 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3933984 00:23:23.526 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:23:23.526 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
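
Each rpc_cmd call above maps one-to-one onto scripts/rpc.py against the target's /var/tmp/spdk.sock. A hedged replay of adq_configure_nvmf_target as a plain rpc.py session (the rpc path and default socket are assumptions; the subcommands and argument strings are copied from the trace, starting with the posix sock options set just before this block):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
    $rpc framework_start_init                      # finish startup deferred by --wait-for-rpc
    $rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    $rpc bdev_malloc_create 64 512 -b Malloc1      # 64 MiB RAM-backed namespace, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The initiator then drives that listener with spdk_nvme_perf (-q 64 -o 4096 -w randread -t 10 -c 0xF0), one connection per core in mask 0xF0, which is exactly what the qpair accounting below checks.
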
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:23:25.443 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats
00:23:25.443 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:25.443 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:23:25.443 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:25.443 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{
00:23:25.443 "tick_rate": 2400000000,
00:23:25.443 "poll_groups": [
00:23:25.443 {
00:23:25.443 "name": "nvmf_tgt_poll_group_000",
00:23:25.443 "admin_qpairs": 1,
00:23:25.443 "io_qpairs": 1,
00:23:25.443 "current_admin_qpairs": 1,
00:23:25.443 "current_io_qpairs": 1,
00:23:25.443 "pending_bdev_io": 0,
00:23:25.443 "completed_nvme_io": 15549,
00:23:25.443 "transports": [
00:23:25.443 {
00:23:25.443 "trtype": "TCP"
00:23:25.443 }
00:23:25.443 ]
00:23:25.443 },
00:23:25.443 {
00:23:25.443 "name": "nvmf_tgt_poll_group_001",
00:23:25.443 "admin_qpairs": 0,
00:23:25.443 "io_qpairs": 1,
00:23:25.443 "current_admin_qpairs": 0,
00:23:25.443 "current_io_qpairs": 1,
00:23:25.443 "pending_bdev_io": 0,
00:23:25.443 "completed_nvme_io": 15936,
00:23:25.443 "transports": [
00:23:25.443 {
00:23:25.443 "trtype": "TCP"
00:23:25.443 }
00:23:25.443 ]
00:23:25.443 },
00:23:25.443 {
00:23:25.443 "name": "nvmf_tgt_poll_group_002",
00:23:25.443 "admin_qpairs": 0,
00:23:25.443 "io_qpairs": 1,
00:23:25.443 "current_admin_qpairs": 0,
00:23:25.443 "current_io_qpairs": 1,
00:23:25.443 "pending_bdev_io": 0,
00:23:25.443 "completed_nvme_io": 16221,
00:23:25.443 "transports": [
00:23:25.443 {
00:23:25.443 "trtype": "TCP"
00:23:25.443 }
00:23:25.443 ]
00:23:25.443 },
00:23:25.443 {
00:23:25.443 "name": "nvmf_tgt_poll_group_003",
00:23:25.443 "admin_qpairs": 0,
00:23:25.443 "io_qpairs": 1,
00:23:25.443 "current_admin_qpairs": 0,
00:23:25.443 "current_io_qpairs": 1,
00:23:25.443 "pending_bdev_io": 0,
00:23:25.443 "completed_nvme_io": 15740,
00:23:25.443 "transports": [
00:23:25.443 {
00:23:25.443 "trtype": "TCP"
00:23:25.443 }
00:23:25.443 ]
00:23:25.443 }
00:23:25.443 ]
00:23:25.443 }'
00:23:25.443 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'
00:23:25.443 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l
00:23:25.443 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4
00:23:25.443 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]]
00:23:25.443 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3933984
00:23:33.578 Initializing NVMe Controllers
00:23:33.578 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:33.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:23:33.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:23:33.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:23:33.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:23:33.578 Initialization complete. Launching workers.
00:23:33.578 ========================================================
00:23:33.578 Latency(us)
00:23:33.578 Device Information : IOPS MiB/s Average min max
00:23:33.578 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11953.60 46.69 5354.18 1217.22 11778.93
00:23:33.578 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 12829.20 50.11 4989.44 1089.26 13328.95
00:23:33.578 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13249.00 51.75 4830.09 1291.38 13479.31
00:23:33.578 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12078.50 47.18 5299.67 1263.61 13923.44
00:23:33.578 ========================================================
00:23:33.578 Total : 50110.30 195.74 5109.09 1089.26 13923.44
00:23:33.578
00:23:33.578 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini
00:23:33.579 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:33.579 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:23:33.579 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:33.579 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:23:33.579 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:33.579 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:33.579 rmmod nvme_tcp
00:23:33.579 rmmod nvme_fabrics
00:23:33.579 rmmod nvme_keyring
00:23:33.579 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:33.579 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:23:33.579 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:23:33.579 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3933789 ']'
00:23:33.579 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3933789
00:23:33.579 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3933789 ']'
00:23:33.579 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3933789
00:23:33.579 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname
00:23:33.579 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:33.579 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3933789
00:23:33.840 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:33.840 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:33.840 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3933789'
killing process with pid 3933789
00:23:33.840 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3933789
00:23:33.840 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3933789
00:23:33.841 09:55:49
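
The nvmf_get_stats snapshot above is the actual ADQ pass/fail gate: with four initiator cores and placement-id steering, each of the four target poll groups must own exactly one active I/O qpair, and here all four do (current_io_qpairs 1 apiece, roughly 15.5k-16.2k completed I/Os each at snapshot time), so perf is allowed to run to completion. A hedged standalone version of the perf_adq.sh@86 check, reusing the $rpc path assumed in the sketch above:

    # count poll groups that own exactly one live io_qpair; all 4 must qualify
    count=$($rpc nvmf_get_stats \
            | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
            | wc -l)
    [[ $count -ne 4 ]] && echo "ADQ steering failed: $count/4 poll groups have one qpair"
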
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:33.841 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:33.841 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:33.841 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:23:33.841 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:23:33.841 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:33.841 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:23:33.841 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:33.841 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:33.841 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.841 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.841 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.387 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:36.387 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:23:36.387 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:23:36.387 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:23:37.346 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:23:39.893 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:45.184 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:45.185 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:45.185 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:45.185 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:45.185 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:45.185 09:55:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:45.185 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:45.185 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:45.185 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:45.185 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:45.185 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:45.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:45.185 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:23:45.185 00:23:45.185 --- 10.0.0.2 ping statistics --- 00:23:45.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.185 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:23:45.185 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:45.185 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:45.185 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:23:45.185 00:23:45.185 --- 10.0.0.1 ping statistics --- 00:23:45.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.185 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:23:45.185 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:45.185 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:23:45.185 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:45.185 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:45.185 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:45.185 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:45.185 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:45.185 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:45.185 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:45.185 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:23:45.185 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:45.185 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:45.185 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:45.185 net.core.busy_poll = 1 00:23:45.185 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:45.185 net.core.busy_read = 1 00:23:45.185 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:45.185 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:45.185 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:45.185 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:45.185 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:45.185 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:45.185 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:45.185 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:45.185 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:45.185 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3938614 00:23:45.185 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3938614 00:23:45.185 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:45.185 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3938614 ']' 00:23:45.185 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:45.185 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:45.186 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:45.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:45.186 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:45.186 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:45.186 [2024-11-27 09:56:00.464688] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:23:45.186 [2024-11-27 09:56:00.464751] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:45.186 [2024-11-27 09:56:00.566330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:45.186 [2024-11-27 09:56:00.619575] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
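The adq_configure_driver phase traced above reduces to a short sequence of host commands. A minimal recap under this run's assumptions (interface cvl_0_0 inside namespace cvl_0_0_ns_spdk, NVMe/TCP on 10.0.0.2:4420; the queue layout 2@0 2@2 is copied from the trace):

    # NIC-side commands run inside the target namespace, as the trace does
    NS="ip netns exec cvl_0_0_ns_spdk"

    # enable hardware TC offload and drop the packet-inspect optimization
    $NS ethtool --offload cvl_0_0 hw-tc-offload on
    $NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off

    # busy polling keeps application threads spinning on their sockets
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1

    # two traffic classes: TC0 owns queues 0-1, TC1 owns queues 2-3
    $NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    $NS tc qdisc add dev cvl_0_0 ingress

    # steer NVMe/TCP (dst port 4420) into TC1 purely in hardware
    $NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

    # the run then pins XPS/RX queue affinity via scripts/perf/nvmf/set_xps_rxqs
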
00:23:45.186 [2024-11-27 09:56:00.619632] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:45.186 [2024-11-27 09:56:00.619641] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:45.186 [2024-11-27 09:56:00.619649] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:45.186 [2024-11-27 09:56:00.619655] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:45.186 [2024-11-27 09:56:00.622054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:45.186 [2024-11-27 09:56:00.622217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:45.186 [2024-11-27 09:56:00.622308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:45.186 [2024-11-27 09:56:00.622310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.130 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:46.130 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:23:46.130 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:46.130 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:46.130 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.131 09:56:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:46.131 [2024-11-27 09:56:01.483077] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:46.131 Malloc1 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:46.131 [2024-11-27 09:56:01.562471] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3938764 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:23:46.131 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:48.677 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:23:48.677 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.677 09:56:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:48.677 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.677 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:23:48.677 "tick_rate": 2400000000, 00:23:48.677 "poll_groups": [ 00:23:48.677 { 00:23:48.677 "name": "nvmf_tgt_poll_group_000", 00:23:48.677 "admin_qpairs": 1, 00:23:48.677 "io_qpairs": 4, 00:23:48.677 "current_admin_qpairs": 1, 00:23:48.677 "current_io_qpairs": 4, 00:23:48.677 "pending_bdev_io": 0, 00:23:48.677 "completed_nvme_io": 35054, 00:23:48.677 "transports": [ 00:23:48.677 { 00:23:48.677 "trtype": "TCP" 00:23:48.677 } 00:23:48.677 ] 00:23:48.677 }, 00:23:48.677 { 00:23:48.677 "name": "nvmf_tgt_poll_group_001", 00:23:48.677 "admin_qpairs": 0, 00:23:48.677 "io_qpairs": 0, 00:23:48.677 "current_admin_qpairs": 0, 00:23:48.677 "current_io_qpairs": 0, 00:23:48.677 "pending_bdev_io": 0, 00:23:48.677 "completed_nvme_io": 0, 00:23:48.677 "transports": [ 00:23:48.677 { 00:23:48.677 "trtype": "TCP" 00:23:48.677 } 00:23:48.677 ] 00:23:48.677 }, 00:23:48.677 { 00:23:48.677 "name": "nvmf_tgt_poll_group_002", 00:23:48.677 "admin_qpairs": 0, 00:23:48.677 "io_qpairs": 0, 00:23:48.677 "current_admin_qpairs": 0, 00:23:48.677 "current_io_qpairs": 0, 00:23:48.677 "pending_bdev_io": 0, 00:23:48.677 "completed_nvme_io": 0, 00:23:48.677 "transports": [ 00:23:48.677 { 00:23:48.677 "trtype": "TCP" 00:23:48.677 } 00:23:48.677 ] 00:23:48.677 }, 00:23:48.677 { 00:23:48.677 "name": "nvmf_tgt_poll_group_003", 00:23:48.677 "admin_qpairs": 0, 00:23:48.677 "io_qpairs": 0, 00:23:48.677 "current_admin_qpairs": 0, 00:23:48.677 "current_io_qpairs": 0, 00:23:48.677 "pending_bdev_io": 0, 00:23:48.677 "completed_nvme_io": 0, 00:23:48.677 "transports": [ 00:23:48.677 { 00:23:48.677 "trtype": "TCP" 00:23:48.677 } 00:23:48.677 ] 00:23:48.677 } 00:23:48.677 ] 00:23:48.677 }' 00:23:48.677 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:48.677 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:23:48.677 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:23:48.677 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:23:48.677 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3938764 00:23:56.811 Initializing NVMe Controllers 00:23:56.811 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:56.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:56.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:56.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:56.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:56.811 Initialization complete. Launching workers. 
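The nvmf_get_stats output that follows is what the ADQ placement check parses: all four I/O qpairs opened by the 0xF0-masked initiator land on nvmf_tgt_poll_group_000, leaving the other three poll groups idle, and the check at perf_adq.sh@109 apparently treats fewer than two idle groups as a placement failure. A sketch of the same check, assuming the target's default RPC socket /var/tmp/spdk.sock (the test's rpc_cmd wrapper is replaced here with a direct rpc.py call):

    # count poll groups that received no I/O qpairs; with ADQ-based socket
    # placement, connections concentrate and most groups should stay idle
    count=$(scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_stats \
            | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
            | wc -l)
    if [[ $count -lt 2 ]]; then
        echo "ADQ placement check failed: only $count idle poll groups" >&2
        exit 1
    fi
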
00:23:56.811 ======================================================== 00:23:56.811 Latency(us) 00:23:56.811 Device Information : IOPS MiB/s Average min max 00:23:56.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6361.10 24.85 10095.56 1192.23 56620.14 00:23:56.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6122.70 23.92 10453.79 1237.54 56733.81 00:23:56.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6391.90 24.97 10013.70 1167.93 58745.49 00:23:56.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6302.40 24.62 10156.20 1270.56 59437.26 00:23:56.811 ======================================================== 00:23:56.811 Total : 25178.10 98.35 10177.07 1167.93 59437.26 00:23:56.811 00:23:56.811 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:23:56.811 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:56.811 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:23:56.811 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:56.811 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:23:56.811 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:56.811 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:56.811 rmmod nvme_tcp 00:23:56.811 rmmod nvme_fabrics 00:23:56.811 rmmod nvme_keyring 00:23:56.812 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:56.812 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:23:56.812 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:23:56.812 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3938614 ']' 00:23:56.812 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3938614 00:23:56.812 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3938614 ']' 00:23:56.812 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3938614 00:23:56.812 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:23:56.812 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:56.812 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3938614 00:23:56.812 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:56.812 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:56.812 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3938614' 00:23:56.812 killing process with pid 3938614 00:23:56.812 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3938614 00:23:56.812 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3938614 00:23:56.812 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:56.812 
09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:56.812 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:56.812 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:23:56.812 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:23:56.812 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:56.812 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:23:56.812 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:56.812 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:56.812 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.812 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:56.812 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.112 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:00.112 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:24:00.112 00:24:00.112 real 0m54.000s 00:24:00.112 user 2m50.362s 00:24:00.112 sys 0m11.245s 00:24:00.112 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:00.112 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:00.112 ************************************ 00:24:00.112 END TEST nvmf_perf_adq 00:24:00.112 ************************************ 00:24:00.112 09:56:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:00.112 09:56:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:00.112 09:56:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:00.112 09:56:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:00.112 ************************************ 00:24:00.112 START TEST nvmf_shutdown 00:24:00.112 ************************************ 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:00.113 * Looking for test storage... 
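The nvmftestfini teardown recorded just above undoes the data-path setup: tagged firewall rules are filtered out of a full ruleset dump, the target namespace is removed, and the initiator-side address is flushed. A condensed sketch; the namespace deletion itself is inferred from _remove_spdk_ns, whose output the trace silences:

    # drop every iptables rule the test tagged with the SPDK_NVMF comment
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # tear down the target namespace (returns cvl_0_0 to the root namespace)
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true

    # flush the initiator-side address so the next test starts clean
    ip -4 addr flush cvl_0_1
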
00:24:00.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:00.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.113 --rc genhtml_branch_coverage=1 00:24:00.113 --rc genhtml_function_coverage=1 00:24:00.113 --rc genhtml_legend=1 00:24:00.113 --rc geninfo_all_blocks=1 00:24:00.113 --rc geninfo_unexecuted_blocks=1 00:24:00.113 00:24:00.113 ' 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:00.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.113 --rc genhtml_branch_coverage=1 00:24:00.113 --rc genhtml_function_coverage=1 00:24:00.113 --rc genhtml_legend=1 00:24:00.113 --rc geninfo_all_blocks=1 00:24:00.113 --rc geninfo_unexecuted_blocks=1 00:24:00.113 00:24:00.113 ' 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:00.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.113 --rc genhtml_branch_coverage=1 00:24:00.113 --rc genhtml_function_coverage=1 00:24:00.113 --rc genhtml_legend=1 00:24:00.113 --rc geninfo_all_blocks=1 00:24:00.113 --rc geninfo_unexecuted_blocks=1 00:24:00.113 00:24:00.113 ' 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:00.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.113 --rc genhtml_branch_coverage=1 00:24:00.113 --rc genhtml_function_coverage=1 00:24:00.113 --rc genhtml_legend=1 00:24:00.113 --rc geninfo_all_blocks=1 00:24:00.113 --rc geninfo_unexecuted_blocks=1 00:24:00.113 00:24:00.113 ' 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
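The lcov gate above exercises cmp_versions from scripts/common.sh: both version strings are split on '.', '-', and ':' and compared component by component, so 1.15 sorts below 2 and the legacy --rc lcov_* options get exported. A standalone sketch of that comparison; the real helper also dispatches on the operator argument, which is fixed to '<' here:

    # returns 0 when $1 is strictly older than $3 (op "$2" assumed to be '<')
    cmp_versions() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            # missing components compare as 0, so "2" behaves like "2.0"
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not strictly less-than
    }

    cmp_versions 1.15 '<' 2 && echo "lcov 1.15 predates 2.x"
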
00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:00.113 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:00.114 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.114 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.114 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.114 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:24:00.114 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.114 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:24:00.114 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:00.114 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:00.114 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:00.114 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:00.114 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:00.114 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:00.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:00.114 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:00.114 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:00.114 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:00.114 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:00.114 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:00.114 09:56:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:24:00.114 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:00.114 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:00.114 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:00.114 ************************************ 00:24:00.114 START TEST nvmf_shutdown_tc1 00:24:00.114 ************************************ 00:24:00.114 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:24:00.114 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:24:00.114 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:00.114 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:00.114 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:00.114 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:00.114 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:00.114 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:00.114 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.114 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:00.114 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.114 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:00.114 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:00.114 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:00.114 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:08.272 09:56:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:08.272 09:56:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:08.272 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:08.272 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:08.272 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:08.272 09:56:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:08.272 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:08.272 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:08.273 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:08.273 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:24:08.273 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:08.273 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:08.273 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:08.273 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:08.273 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:08.273 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:08.273 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:08.273 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:08.273 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:08.273 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:08.273 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:08.273 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:08.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:08.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms 00:24:08.273 00:24:08.273 --- 10.0.0.2 ping statistics --- 00:24:08.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.273 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:24:08.273 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:08.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:08.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:24:08.273 00:24:08.273 --- 10.0.0.1 ping statistics --- 00:24:08.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.273 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:24:08.273 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:08.273 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:24:08.273 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:08.273 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:08.273 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:08.273 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:08.273 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:08.273 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:08.273 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:08.273 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:08.273 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:08.273 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:08.273 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:08.273 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3945334 00:24:08.273 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3945334 00:24:08.273 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:08.273 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3945334 ']' 00:24:08.273 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.273 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:08.273 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
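The interface wiring that nvmf_tcp_init just repeated for shutdown_tc1 is the same recipe as the perf_adq run: one port of the E810 pair becomes the target inside a private namespace, the other stays in the root namespace as the initiator. Condensed, with names exactly as logged:

    # target side lives in its own namespace; initiator stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # open the NVMe/TCP port with a tagged rule, then verify both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
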
00:24:08.273 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:08.273 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:08.273 [2024-11-27 09:56:23.145414] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:24:08.273 [2024-11-27 09:56:23.145485] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:08.273 [2024-11-27 09:56:23.245872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:08.273 [2024-11-27 09:56:23.299681] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:08.273 [2024-11-27 09:56:23.299734] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:08.273 [2024-11-27 09:56:23.299743] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:08.273 [2024-11-27 09:56:23.299751] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:08.273 [2024-11-27 09:56:23.299757] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:08.273 [2024-11-27 09:56:23.301810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:08.273 [2024-11-27 09:56:23.301971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:08.273 [2024-11-27 09:56:23.302131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.273 [2024-11-27 09:56:23.302131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:08.534 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:08.534 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:24:08.534 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:08.534 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:08.534 09:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:08.796 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:08.796 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:08.796 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.796 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:08.796 [2024-11-27 09:56:24.009887] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:08.796 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.796 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:08.796 09:56:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:08.796 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:08.796 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:08.796 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:08.797 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:08.797 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:08.797 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:08.797 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:08.797 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:08.797 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:08.797 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:08.797 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:08.797 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:08.797 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:08.797 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:08.797 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:08.797 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:08.797 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:08.797 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:08.797 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:08.797 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:08.797 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:08.797 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:08.797 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:08.797 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:08.797 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.797 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:08.797 Malloc1 
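The cat loop above assembles rpcs.txt with one fragment per subsystem (num_subsystems is {1..10}) and the single rpc_cmd call replays the whole batch against the target; the Malloc1..Malloc10 lines that follow are those bdevs being created. xtrace does not show the heredoc bodies, but a representative fragment for one subsystem, assuming the standard SPDK RPC set (the bdev size and serial number below are illustrative, not taken from the script):

# hypothetical per-subsystem batch appended to rpcs.txt by the "cat" steps
bdev_malloc_create -b Malloc1 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The listener address matches the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice that appears once the batch runs.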
00:24:08.797 [2024-11-27 09:56:24.139433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:08.797 Malloc2 00:24:08.797 Malloc3 00:24:08.797 Malloc4 00:24:09.058 Malloc5 00:24:09.058 Malloc6 00:24:09.058 Malloc7 00:24:09.058 Malloc8 00:24:09.058 Malloc9 00:24:09.321 Malloc10 00:24:09.321 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.321 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:09.321 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:09.321 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:09.321 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3945605 00:24:09.321 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3945605 /var/tmp/bdevperf.sock 00:24:09.321 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3945605 ']' 00:24:09.321 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:09.321 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:09.321 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:09.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
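Next the harness starts a scratch bdev_svc instance (-i 1) on its own RPC socket, /var/tmp/bdevperf.sock, feeding it a bdev configuration generated on the fly by gen_nvmf_target_json: one bdev_nvme_attach_controller entry per subsystem, built as heredoc fragments in a bash array and comma-joined. A condensed sketch of that pattern, assuming the fragments end up in the standard {"subsystems":[{"subsystem":"bdev","config":[...]}]} envelope (the trace only shows the fragments plus the jq and IFS=,/printf steps):

gen_json() {
  local config=() i
  for i in "$@"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$i",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$i",
    "hostnqn": "nqn.2016-06.io.spdk:host$i",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
  done
  local IFS=,   # join the fragments with commas, as the trace's printf does
  printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}"
}

# scratch app on its own socket; framework_wait_init blocks until init completes
./test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_json {1..10}) &
perfpid=$!
./scripts/rpc.py -s /var/tmp/bdevperf.sock framework_wait_init

Once init is confirmed, the scratch process is kill -9'd (hence the "line 74: ... Killed" job notice further down) and kill -0 is used to check that the main target, pid 3945334, survived; kill -0 delivers no signal, it only tests that the pid exists and is signalable.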
00:24:09.321 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:09.321 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:09.321 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:09.321 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:09.321 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:24:09.321 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:24:09.321 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:09.321 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:09.321 { 00:24:09.321 "params": { 00:24:09.321 "name": "Nvme$subsystem", 00:24:09.321 "trtype": "$TEST_TRANSPORT", 00:24:09.321 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:09.321 "adrfam": "ipv4", 00:24:09.321 "trsvcid": "$NVMF_PORT", 00:24:09.321 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:09.321 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:09.321 "hdgst": ${hdgst:-false}, 00:24:09.321 "ddgst": ${ddgst:-false} 00:24:09.321 }, 00:24:09.321 "method": "bdev_nvme_attach_controller" 00:24:09.321 } 00:24:09.321 EOF 00:24:09.321 )") 00:24:09.321 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:09.321 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:09.321 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:09.321 { 00:24:09.321 "params": { 00:24:09.321 "name": "Nvme$subsystem", 00:24:09.321 "trtype": "$TEST_TRANSPORT", 00:24:09.321 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:09.321 "adrfam": "ipv4", 00:24:09.321 "trsvcid": "$NVMF_PORT", 00:24:09.321 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:09.321 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:09.321 "hdgst": ${hdgst:-false}, 00:24:09.321 "ddgst": ${ddgst:-false} 00:24:09.321 }, 00:24:09.321 "method": "bdev_nvme_attach_controller" 00:24:09.321 } 00:24:09.321 EOF 00:24:09.321 )") 00:24:09.321 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:09.321 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:09.321 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:09.321 { 00:24:09.321 "params": { 00:24:09.321 "name": "Nvme$subsystem", 00:24:09.321 "trtype": "$TEST_TRANSPORT", 00:24:09.321 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:09.321 "adrfam": "ipv4", 00:24:09.321 "trsvcid": "$NVMF_PORT", 00:24:09.321 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:09.321 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:09.321 "hdgst": ${hdgst:-false}, 00:24:09.321 "ddgst": ${ddgst:-false} 00:24:09.321 }, 00:24:09.321 "method": "bdev_nvme_attach_controller" 
00:24:09.321 } 00:24:09.321 EOF 00:24:09.321 )") 00:24:09.321 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:09.321 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:09.322 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:09.322 { 00:24:09.322 "params": { 00:24:09.322 "name": "Nvme$subsystem", 00:24:09.322 "trtype": "$TEST_TRANSPORT", 00:24:09.322 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:09.322 "adrfam": "ipv4", 00:24:09.322 "trsvcid": "$NVMF_PORT", 00:24:09.322 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:09.322 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:09.322 "hdgst": ${hdgst:-false}, 00:24:09.322 "ddgst": ${ddgst:-false} 00:24:09.322 }, 00:24:09.322 "method": "bdev_nvme_attach_controller" 00:24:09.322 } 00:24:09.322 EOF 00:24:09.322 )") 00:24:09.322 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:09.322 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:09.322 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:09.322 { 00:24:09.322 "params": { 00:24:09.322 "name": "Nvme$subsystem", 00:24:09.322 "trtype": "$TEST_TRANSPORT", 00:24:09.322 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:09.322 "adrfam": "ipv4", 00:24:09.322 "trsvcid": "$NVMF_PORT", 00:24:09.322 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:09.322 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:09.322 "hdgst": ${hdgst:-false}, 00:24:09.322 "ddgst": ${ddgst:-false} 00:24:09.322 }, 00:24:09.322 "method": "bdev_nvme_attach_controller" 00:24:09.322 } 00:24:09.322 EOF 00:24:09.322 )") 00:24:09.322 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:09.322 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:09.322 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:09.322 { 00:24:09.322 "params": { 00:24:09.322 "name": "Nvme$subsystem", 00:24:09.322 "trtype": "$TEST_TRANSPORT", 00:24:09.322 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:09.322 "adrfam": "ipv4", 00:24:09.322 "trsvcid": "$NVMF_PORT", 00:24:09.322 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:09.322 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:09.322 "hdgst": ${hdgst:-false}, 00:24:09.322 "ddgst": ${ddgst:-false} 00:24:09.322 }, 00:24:09.322 "method": "bdev_nvme_attach_controller" 00:24:09.322 } 00:24:09.322 EOF 00:24:09.322 )") 00:24:09.322 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:09.322 [2024-11-27 09:56:24.657001] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
00:24:09.322 [2024-11-27 09:56:24.657075] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:09.322 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:09.322 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:09.322 { 00:24:09.322 "params": { 00:24:09.322 "name": "Nvme$subsystem", 00:24:09.322 "trtype": "$TEST_TRANSPORT", 00:24:09.322 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:09.322 "adrfam": "ipv4", 00:24:09.322 "trsvcid": "$NVMF_PORT", 00:24:09.322 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:09.322 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:09.322 "hdgst": ${hdgst:-false}, 00:24:09.322 "ddgst": ${ddgst:-false} 00:24:09.322 }, 00:24:09.322 "method": "bdev_nvme_attach_controller" 00:24:09.322 } 00:24:09.322 EOF 00:24:09.322 )") 00:24:09.322 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:09.322 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:09.322 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:09.322 { 00:24:09.322 "params": { 00:24:09.322 "name": "Nvme$subsystem", 00:24:09.322 "trtype": "$TEST_TRANSPORT", 00:24:09.322 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:09.322 "adrfam": "ipv4", 00:24:09.322 "trsvcid": "$NVMF_PORT", 00:24:09.322 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:09.322 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:09.322 "hdgst": ${hdgst:-false}, 00:24:09.322 "ddgst": ${ddgst:-false} 00:24:09.322 }, 00:24:09.322 "method": "bdev_nvme_attach_controller" 00:24:09.322 } 00:24:09.322 EOF 00:24:09.322 )") 00:24:09.322 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:09.322 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:09.322 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:09.322 { 00:24:09.322 "params": { 00:24:09.322 "name": "Nvme$subsystem", 00:24:09.322 "trtype": "$TEST_TRANSPORT", 00:24:09.322 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:09.322 "adrfam": "ipv4", 00:24:09.322 "trsvcid": "$NVMF_PORT", 00:24:09.322 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:09.322 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:09.322 "hdgst": ${hdgst:-false}, 00:24:09.322 "ddgst": ${ddgst:-false} 00:24:09.322 }, 00:24:09.322 "method": "bdev_nvme_attach_controller" 00:24:09.322 } 00:24:09.322 EOF 00:24:09.322 )") 00:24:09.322 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:09.322 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:09.322 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:09.322 { 00:24:09.322 "params": { 00:24:09.322 "name": "Nvme$subsystem", 00:24:09.322 "trtype": "$TEST_TRANSPORT", 00:24:09.322 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:09.322 "adrfam": "ipv4", 
00:24:09.322 "trsvcid": "$NVMF_PORT", 00:24:09.322 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:09.322 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:09.322 "hdgst": ${hdgst:-false}, 00:24:09.322 "ddgst": ${ddgst:-false} 00:24:09.322 }, 00:24:09.322 "method": "bdev_nvme_attach_controller" 00:24:09.322 } 00:24:09.322 EOF 00:24:09.322 )") 00:24:09.322 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:09.322 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:24:09.322 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:24:09.322 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:09.322 "params": { 00:24:09.322 "name": "Nvme1", 00:24:09.322 "trtype": "tcp", 00:24:09.322 "traddr": "10.0.0.2", 00:24:09.322 "adrfam": "ipv4", 00:24:09.322 "trsvcid": "4420", 00:24:09.322 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:09.322 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:09.322 "hdgst": false, 00:24:09.322 "ddgst": false 00:24:09.322 }, 00:24:09.322 "method": "bdev_nvme_attach_controller" 00:24:09.322 },{ 00:24:09.322 "params": { 00:24:09.322 "name": "Nvme2", 00:24:09.322 "trtype": "tcp", 00:24:09.322 "traddr": "10.0.0.2", 00:24:09.322 "adrfam": "ipv4", 00:24:09.322 "trsvcid": "4420", 00:24:09.322 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:09.322 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:09.322 "hdgst": false, 00:24:09.322 "ddgst": false 00:24:09.322 }, 00:24:09.322 "method": "bdev_nvme_attach_controller" 00:24:09.322 },{ 00:24:09.322 "params": { 00:24:09.322 "name": "Nvme3", 00:24:09.322 "trtype": "tcp", 00:24:09.322 "traddr": "10.0.0.2", 00:24:09.322 "adrfam": "ipv4", 00:24:09.322 "trsvcid": "4420", 00:24:09.322 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:09.322 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:09.322 "hdgst": false, 00:24:09.322 "ddgst": false 00:24:09.322 }, 00:24:09.322 "method": "bdev_nvme_attach_controller" 00:24:09.322 },{ 00:24:09.322 "params": { 00:24:09.322 "name": "Nvme4", 00:24:09.322 "trtype": "tcp", 00:24:09.322 "traddr": "10.0.0.2", 00:24:09.322 "adrfam": "ipv4", 00:24:09.322 "trsvcid": "4420", 00:24:09.322 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:09.322 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:09.322 "hdgst": false, 00:24:09.322 "ddgst": false 00:24:09.322 }, 00:24:09.322 "method": "bdev_nvme_attach_controller" 00:24:09.322 },{ 00:24:09.322 "params": { 00:24:09.322 "name": "Nvme5", 00:24:09.322 "trtype": "tcp", 00:24:09.322 "traddr": "10.0.0.2", 00:24:09.322 "adrfam": "ipv4", 00:24:09.322 "trsvcid": "4420", 00:24:09.322 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:09.322 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:09.322 "hdgst": false, 00:24:09.322 "ddgst": false 00:24:09.322 }, 00:24:09.322 "method": "bdev_nvme_attach_controller" 00:24:09.322 },{ 00:24:09.322 "params": { 00:24:09.322 "name": "Nvme6", 00:24:09.322 "trtype": "tcp", 00:24:09.322 "traddr": "10.0.0.2", 00:24:09.322 "adrfam": "ipv4", 00:24:09.322 "trsvcid": "4420", 00:24:09.322 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:09.322 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:09.322 "hdgst": false, 00:24:09.322 "ddgst": false 00:24:09.322 }, 00:24:09.322 "method": "bdev_nvme_attach_controller" 00:24:09.322 },{ 00:24:09.322 "params": { 00:24:09.322 "name": "Nvme7", 00:24:09.322 "trtype": "tcp", 00:24:09.322 "traddr": "10.0.0.2", 00:24:09.323 
"adrfam": "ipv4", 00:24:09.323 "trsvcid": "4420", 00:24:09.323 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:09.323 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:09.323 "hdgst": false, 00:24:09.323 "ddgst": false 00:24:09.323 }, 00:24:09.323 "method": "bdev_nvme_attach_controller" 00:24:09.323 },{ 00:24:09.323 "params": { 00:24:09.323 "name": "Nvme8", 00:24:09.323 "trtype": "tcp", 00:24:09.323 "traddr": "10.0.0.2", 00:24:09.323 "adrfam": "ipv4", 00:24:09.323 "trsvcid": "4420", 00:24:09.323 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:09.323 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:09.323 "hdgst": false, 00:24:09.323 "ddgst": false 00:24:09.323 }, 00:24:09.323 "method": "bdev_nvme_attach_controller" 00:24:09.323 },{ 00:24:09.323 "params": { 00:24:09.323 "name": "Nvme9", 00:24:09.323 "trtype": "tcp", 00:24:09.323 "traddr": "10.0.0.2", 00:24:09.323 "adrfam": "ipv4", 00:24:09.323 "trsvcid": "4420", 00:24:09.323 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:09.323 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:09.323 "hdgst": false, 00:24:09.323 "ddgst": false 00:24:09.323 }, 00:24:09.323 "method": "bdev_nvme_attach_controller" 00:24:09.323 },{ 00:24:09.323 "params": { 00:24:09.323 "name": "Nvme10", 00:24:09.323 "trtype": "tcp", 00:24:09.323 "traddr": "10.0.0.2", 00:24:09.323 "adrfam": "ipv4", 00:24:09.323 "trsvcid": "4420", 00:24:09.323 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:09.323 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:09.323 "hdgst": false, 00:24:09.323 "ddgst": false 00:24:09.323 }, 00:24:09.323 "method": "bdev_nvme_attach_controller" 00:24:09.323 }' 00:24:09.323 [2024-11-27 09:56:24.753316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.583 [2024-11-27 09:56:24.807060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.969 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:10.969 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:24:10.969 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:10.969 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.969 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:10.969 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.969 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3945605 00:24:10.969 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:24:10.969 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:24:11.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3945605 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:24:11.914 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3945334 00:24:11.914 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:11.914 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:11.914 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:24:11.914 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:24:11.914 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:11.914 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:11.914 { 00:24:11.914 "params": { 00:24:11.914 "name": "Nvme$subsystem", 00:24:11.914 "trtype": "$TEST_TRANSPORT", 00:24:11.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:11.914 "adrfam": "ipv4", 00:24:11.914 "trsvcid": "$NVMF_PORT", 00:24:11.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:11.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:11.914 "hdgst": ${hdgst:-false}, 00:24:11.914 "ddgst": ${ddgst:-false} 00:24:11.914 }, 00:24:11.914 "method": "bdev_nvme_attach_controller" 00:24:11.914 } 00:24:11.914 EOF 00:24:11.914 )") 00:24:11.914 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:11.914 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:11.914 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:11.914 { 00:24:11.914 "params": { 00:24:11.914 "name": "Nvme$subsystem", 00:24:11.914 "trtype": "$TEST_TRANSPORT", 00:24:11.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:11.914 "adrfam": "ipv4", 00:24:11.914 "trsvcid": "$NVMF_PORT", 00:24:11.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:11.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:11.914 "hdgst": ${hdgst:-false}, 00:24:11.914 "ddgst": ${ddgst:-false} 00:24:11.914 }, 00:24:11.914 "method": "bdev_nvme_attach_controller" 00:24:11.914 } 00:24:11.914 EOF 00:24:11.914 )") 00:24:11.914 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:11.914 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:11.914 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:11.914 { 00:24:11.914 "params": { 00:24:11.914 "name": "Nvme$subsystem", 00:24:11.914 "trtype": "$TEST_TRANSPORT", 00:24:11.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:11.914 "adrfam": "ipv4", 00:24:11.914 "trsvcid": "$NVMF_PORT", 00:24:11.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:11.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:11.914 "hdgst": ${hdgst:-false}, 00:24:11.914 "ddgst": ${ddgst:-false} 00:24:11.914 }, 00:24:11.914 "method": "bdev_nvme_attach_controller" 00:24:11.914 } 00:24:11.914 EOF 00:24:11.914 )") 00:24:11.914 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:11.914 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:11.914 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:11.914 { 00:24:11.914 "params": { 00:24:11.914 "name": "Nvme$subsystem", 00:24:11.914 "trtype": "$TEST_TRANSPORT", 00:24:11.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:11.914 "adrfam": "ipv4", 00:24:11.914 "trsvcid": "$NVMF_PORT", 00:24:11.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:11.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:11.914 "hdgst": ${hdgst:-false}, 00:24:11.914 "ddgst": ${ddgst:-false} 00:24:11.914 }, 00:24:11.914 "method": "bdev_nvme_attach_controller" 00:24:11.914 } 00:24:11.914 EOF 00:24:11.914 )") 00:24:11.914 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:11.914 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:11.914 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:11.914 { 00:24:11.914 "params": { 00:24:11.914 "name": "Nvme$subsystem", 00:24:11.914 "trtype": "$TEST_TRANSPORT", 00:24:11.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:11.914 "adrfam": "ipv4", 00:24:11.914 "trsvcid": "$NVMF_PORT", 00:24:11.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:11.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:11.914 "hdgst": ${hdgst:-false}, 00:24:11.914 "ddgst": ${ddgst:-false} 00:24:11.914 }, 00:24:11.914 "method": "bdev_nvme_attach_controller" 00:24:11.914 } 00:24:11.914 EOF 00:24:11.914 )") 00:24:11.914 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:11.914 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:11.914 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:11.914 { 00:24:11.914 "params": { 00:24:11.914 "name": "Nvme$subsystem", 00:24:11.914 "trtype": "$TEST_TRANSPORT", 00:24:11.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:11.914 "adrfam": "ipv4", 00:24:11.914 "trsvcid": "$NVMF_PORT", 00:24:11.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:11.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:11.914 "hdgst": ${hdgst:-false}, 00:24:11.914 "ddgst": ${ddgst:-false} 00:24:11.914 }, 00:24:11.914 "method": "bdev_nvme_attach_controller" 00:24:11.914 } 00:24:11.914 EOF 00:24:11.914 )") 00:24:11.914 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:11.914 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:11.914 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:11.914 { 00:24:11.914 "params": { 00:24:11.914 "name": "Nvme$subsystem", 00:24:11.914 "trtype": "$TEST_TRANSPORT", 00:24:11.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:11.914 "adrfam": "ipv4", 00:24:11.914 "trsvcid": "$NVMF_PORT", 00:24:11.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:11.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:11.914 "hdgst": ${hdgst:-false}, 00:24:11.914 "ddgst": ${ddgst:-false} 00:24:11.914 }, 00:24:11.914 "method": "bdev_nvme_attach_controller" 00:24:11.914 } 00:24:11.914 EOF 00:24:11.914 )") 00:24:11.914 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:11.914 09:56:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:11.914 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:11.914 { 00:24:11.914 "params": { 00:24:11.914 "name": "Nvme$subsystem", 00:24:11.914 "trtype": "$TEST_TRANSPORT", 00:24:11.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:11.914 "adrfam": "ipv4", 00:24:11.914 "trsvcid": "$NVMF_PORT", 00:24:11.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:11.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:11.914 "hdgst": ${hdgst:-false}, 00:24:11.914 "ddgst": ${ddgst:-false} 00:24:11.914 }, 00:24:11.914 "method": "bdev_nvme_attach_controller" 00:24:11.914 } 00:24:11.914 EOF 00:24:11.914 )") 00:24:11.914 [2024-11-27 09:56:27.309012] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:24:11.914 [2024-11-27 09:56:27.309068] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3946196 ] 00:24:11.914 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:11.914 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:11.914 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:11.914 { 00:24:11.914 "params": { 00:24:11.914 "name": "Nvme$subsystem", 00:24:11.914 "trtype": "$TEST_TRANSPORT", 00:24:11.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:11.914 "adrfam": "ipv4", 00:24:11.914 "trsvcid": "$NVMF_PORT", 00:24:11.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:11.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:11.914 "hdgst": ${hdgst:-false}, 00:24:11.914 "ddgst": ${ddgst:-false} 00:24:11.914 }, 00:24:11.914 "method": "bdev_nvme_attach_controller" 00:24:11.914 } 00:24:11.914 EOF 00:24:11.914 )") 00:24:11.914 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:11.914 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:11.914 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:11.914 { 00:24:11.914 "params": { 00:24:11.914 "name": "Nvme$subsystem", 00:24:11.914 "trtype": "$TEST_TRANSPORT", 00:24:11.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:11.914 "adrfam": "ipv4", 00:24:11.914 "trsvcid": "$NVMF_PORT", 00:24:11.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:11.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:11.915 "hdgst": ${hdgst:-false}, 00:24:11.915 "ddgst": ${ddgst:-false} 00:24:11.915 }, 00:24:11.915 "method": "bdev_nvme_attach_controller" 00:24:11.915 } 00:24:11.915 EOF 00:24:11.915 )") 00:24:11.915 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:11.915 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:24:11.915 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:24:11.915 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:11.915 "params": { 00:24:11.915 "name": "Nvme1", 00:24:11.915 "trtype": "tcp", 00:24:11.915 "traddr": "10.0.0.2", 00:24:11.915 "adrfam": "ipv4", 00:24:11.915 "trsvcid": "4420", 00:24:11.915 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:11.915 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:11.915 "hdgst": false, 00:24:11.915 "ddgst": false 00:24:11.915 }, 00:24:11.915 "method": "bdev_nvme_attach_controller" 00:24:11.915 },{ 00:24:11.915 "params": { 00:24:11.915 "name": "Nvme2", 00:24:11.915 "trtype": "tcp", 00:24:11.915 "traddr": "10.0.0.2", 00:24:11.915 "adrfam": "ipv4", 00:24:11.915 "trsvcid": "4420", 00:24:11.915 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:11.915 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:11.915 "hdgst": false, 00:24:11.915 "ddgst": false 00:24:11.915 }, 00:24:11.915 "method": "bdev_nvme_attach_controller" 00:24:11.915 },{ 00:24:11.915 "params": { 00:24:11.915 "name": "Nvme3", 00:24:11.915 "trtype": "tcp", 00:24:11.915 "traddr": "10.0.0.2", 00:24:11.915 "adrfam": "ipv4", 00:24:11.915 "trsvcid": "4420", 00:24:11.915 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:11.915 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:11.915 "hdgst": false, 00:24:11.915 "ddgst": false 00:24:11.915 }, 00:24:11.915 "method": "bdev_nvme_attach_controller" 00:24:11.915 },{ 00:24:11.915 "params": { 00:24:11.915 "name": "Nvme4", 00:24:11.915 "trtype": "tcp", 00:24:11.915 "traddr": "10.0.0.2", 00:24:11.915 "adrfam": "ipv4", 00:24:11.915 "trsvcid": "4420", 00:24:11.915 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:11.915 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:11.915 "hdgst": false, 00:24:11.915 "ddgst": false 00:24:11.915 }, 00:24:11.915 "method": "bdev_nvme_attach_controller" 00:24:11.915 },{ 00:24:11.915 "params": { 00:24:11.915 "name": "Nvme5", 00:24:11.915 "trtype": "tcp", 00:24:11.915 "traddr": "10.0.0.2", 00:24:11.915 "adrfam": "ipv4", 00:24:11.915 "trsvcid": "4420", 00:24:11.915 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:11.915 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:11.915 "hdgst": false, 00:24:11.915 "ddgst": false 00:24:11.915 }, 00:24:11.915 "method": "bdev_nvme_attach_controller" 00:24:11.915 },{ 00:24:11.915 "params": { 00:24:11.915 "name": "Nvme6", 00:24:11.915 "trtype": "tcp", 00:24:11.915 "traddr": "10.0.0.2", 00:24:11.915 "adrfam": "ipv4", 00:24:11.915 "trsvcid": "4420", 00:24:11.915 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:11.915 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:11.915 "hdgst": false, 00:24:11.915 "ddgst": false 00:24:11.915 }, 00:24:11.915 "method": "bdev_nvme_attach_controller" 00:24:11.915 },{ 00:24:11.915 "params": { 00:24:11.915 "name": "Nvme7", 00:24:11.915 "trtype": "tcp", 00:24:11.915 "traddr": "10.0.0.2", 00:24:11.915 "adrfam": "ipv4", 00:24:11.915 "trsvcid": "4420", 00:24:11.915 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:11.915 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:11.915 "hdgst": false, 00:24:11.915 "ddgst": false 00:24:11.915 }, 00:24:11.915 "method": "bdev_nvme_attach_controller" 00:24:11.915 },{ 00:24:11.915 "params": { 00:24:11.915 "name": "Nvme8", 00:24:11.915 "trtype": "tcp", 00:24:11.915 "traddr": "10.0.0.2", 00:24:11.915 "adrfam": "ipv4", 00:24:11.915 "trsvcid": "4420", 00:24:11.915 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:11.915 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:24:11.915 "hdgst": false, 00:24:11.915 "ddgst": false 00:24:11.915 }, 00:24:11.915 "method": "bdev_nvme_attach_controller" 00:24:11.915 },{ 00:24:11.915 "params": { 00:24:11.915 "name": "Nvme9", 00:24:11.915 "trtype": "tcp", 00:24:11.915 "traddr": "10.0.0.2", 00:24:11.915 "adrfam": "ipv4", 00:24:11.915 "trsvcid": "4420", 00:24:11.915 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:11.915 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:11.915 "hdgst": false, 00:24:11.915 "ddgst": false 00:24:11.915 }, 00:24:11.915 "method": "bdev_nvme_attach_controller" 00:24:11.915 },{ 00:24:11.915 "params": { 00:24:11.915 "name": "Nvme10", 00:24:11.915 "trtype": "tcp", 00:24:11.915 "traddr": "10.0.0.2", 00:24:11.915 "adrfam": "ipv4", 00:24:11.915 "trsvcid": "4420", 00:24:11.915 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:11.915 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:11.915 "hdgst": false, 00:24:11.915 "ddgst": false 00:24:11.915 }, 00:24:11.915 "method": "bdev_nvme_attach_controller" 00:24:11.915 }' 00:24:12.176 [2024-11-27 09:56:27.398288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.176 [2024-11-27 09:56:27.434075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.562 Running I/O for 1 seconds... 00:24:14.503 1856.00 IOPS, 116.00 MiB/s 00:24:14.503 Latency(us) 00:24:14.503 [2024-11-27T08:56:29.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.503 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:14.503 Verification LBA range: start 0x0 length 0x400 00:24:14.503 Nvme1n1 : 1.09 233.90 14.62 0.00 0.00 270512.85 14854.83 251658.24 00:24:14.503 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:14.503 Verification LBA range: start 0x0 length 0x400 00:24:14.503 Nvme2n1 : 1.14 224.77 14.05 0.00 0.00 276454.61 15291.73 253405.87 00:24:14.503 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:14.503 Verification LBA range: start 0x0 length 0x400 00:24:14.503 Nvme3n1 : 1.13 226.70 14.17 0.00 0.00 269290.24 21189.97 246415.36 00:24:14.503 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:14.503 Verification LBA range: start 0x0 length 0x400 00:24:14.503 Nvme4n1 : 1.09 234.60 14.66 0.00 0.00 255294.08 18022.40 251658.24 00:24:14.503 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:14.503 Verification LBA range: start 0x0 length 0x400 00:24:14.503 Nvme5n1 : 1.13 227.45 14.22 0.00 0.00 259062.40 19442.35 249910.61 00:24:14.503 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:14.503 Verification LBA range: start 0x0 length 0x400 00:24:14.503 Nvme6n1 : 1.14 224.58 14.04 0.00 0.00 257563.31 14964.05 251658.24 00:24:14.503 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:14.503 Verification LBA range: start 0x0 length 0x400 00:24:14.503 Nvme7n1 : 1.20 266.61 16.66 0.00 0.00 214384.30 14417.92 232434.35 00:24:14.503 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:14.503 Verification LBA range: start 0x0 length 0x400 00:24:14.503 Nvme8n1 : 1.19 275.47 17.22 0.00 0.00 195826.32 2102.61 244667.73 00:24:14.503 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:14.503 Verification LBA range: start 0x0 length 0x400 00:24:14.503 Nvme9n1 : 1.21 263.66 16.48 0.00 0.00 209338.45 12451.84 265639.25 00:24:14.503 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:24:14.503 Verification LBA range: start 0x0 length 0x400 00:24:14.503 Nvme10n1 : 1.21 265.52 16.59 0.00 0.00 203837.70 7427.41 286610.77 00:24:14.503 [2024-11-27T08:56:29.969Z] =================================================================================================================== 00:24:14.503 [2024-11-27T08:56:29.969Z] Total : 2443.26 152.70 0.00 0.00 237842.01 2102.61 286610.77 00:24:14.764 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:24:14.764 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:14.764 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:14.764 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:14.764 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:14.764 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:14.764 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:24:14.764 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:14.764 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:24:14.764 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:14.764 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:14.764 rmmod nvme_tcp 00:24:14.764 rmmod nvme_fabrics 00:24:14.764 rmmod nvme_keyring 00:24:14.764 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:14.764 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:24:14.764 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:24:14.764 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3945334 ']' 00:24:14.764 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3945334 00:24:14.764 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3945334 ']' 00:24:14.764 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 3945334 00:24:14.764 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:24:14.764 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:14.764 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3945334 00:24:14.764 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:14.764 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
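The numbers above are internally consistent: bdevperf ran with -q 64 (queue depth), -o 65536 (64 KiB I/Os) and -w verify for -t 1 second, and throughput is simply IOPS times I/O size, so the headline 1856.00 IOPS is exactly 116.00 MiB/s, and the Total row's 2443.26 IOPS gives the reported 152.70 MiB/s. A one-line check:

# MiB/s = IOPS * io_size_bytes / 2^20, here with 64 KiB I/Os
echo $(( 1856 * 65536 / 1048576 ))   # -> 116
# Total row: 2443.26 * 65536 / 1048576 = 2443.26 / 16 = 152.70 MiB/s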
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:14.764 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3945334' 00:24:14.764 killing process with pid 3945334 00:24:14.764 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3945334 00:24:14.764 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3945334 00:24:15.024 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:15.024 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:15.025 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:15.025 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:24:15.025 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:24:15.025 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:15.025 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:24:15.025 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:15.025 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:15.025 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.025 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:15.025 09:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.571 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:17.571 00:24:17.571 real 0m16.976s 00:24:17.571 user 0m34.286s 00:24:17.571 sys 0m7.052s 00:24:17.571 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:17.571 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:17.571 ************************************ 00:24:17.571 END TEST nvmf_shutdown_tc1 00:24:17.571 ************************************ 00:24:17.571 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:24:17.571 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:17.571 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:17.571 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:17.571 ************************************ 00:24:17.571 START TEST nvmf_shutdown_tc2 00:24:17.571 ************************************ 00:24:17.571 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:24:17.571 09:56:32 
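tc1's teardown, traced just above, runs off the trap installed at startup ('process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' on SIGINT/SIGTERM/EXIT): unload the host-side NVMe modules, probe and kill the target, strip only the iptables rules tagged SPDK_NVMF, and dismantle the namespace. Condensed from the traced commands, with one assumption flagged:

modprobe -v -r nvme-tcp              # verbose removal; the rmmod lines above are its output
killprocess() {
    local pid=$1
    kill -0 "$pid"                   # liveness probe only, no signal is delivered
    kill "$pid"                      # default SIGTERM
    wait "$pid"                      # reap; valid because the target is this shell's child
}
killprocess 3945334
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged rules
ip netns delete cvl_0_0_ns_spdk      # assumed body of _remove_spdk_ns (not shown in the trace)
ip -4 addr flush cvl_0_1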
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:24:17.571 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:17.571 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:17.571 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:17.571 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:17.571 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:17.571 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:17.571 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.571 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:17.571 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.571 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:17.571 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 
-- # mlx=() 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:17.572 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:17.572 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:17.572 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:17.572 09:56:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:17.572 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:17.572 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:17.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:17.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms
00:24:17.573
00:24:17.573 --- 10.0.0.2 ping statistics ---
00:24:17.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:17.573 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms
00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:17.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:17.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms
00:24:17.573
00:24:17.573 --- 10.0.0.1 ping statistics ---
00:24:17.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:17.573 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms
00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0
00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:17.573 09:56:32
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3947309 00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3947309 00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3947309 ']' 00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:17.573 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:17.573 [2024-11-27 09:56:32.979939] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:24:17.573 [2024-11-27 09:56:32.980007] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:17.835 [2024-11-27 09:56:33.075537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:17.835 [2024-11-27 09:56:33.109503] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:17.835 [2024-11-27 09:56:33.109533] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:17.835 [2024-11-27 09:56:33.109539] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:17.835 [2024-11-27 09:56:33.109543] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:17.835 [2024-11-27 09:56:33.109548] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
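The prologue traced above is nvmf_tcp_init plus nvmfappstart: the two E810 ports are wired into a point-to-point pair, with cvl_0_0 moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) and cvl_0_1 left in the root namespace as the initiator side (10.0.0.1), and nvmf_tgt is then launched inside that namespace. A condensed sketch of the equivalent commands, using this run's device names, addresses, and flags as shown in the trace (the final wait is a simplified stand-in for waitforlisten, which also keeps checking that the pid stays alive):

# sketch only -- values taken from the trace above, not a general-purpose script
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port leaves the root netns
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side stays in the root netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # tagged SPDK_NVMF in the real rule so teardown can strip it
ping -c 1 10.0.0.2                                         # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done      # simplified waitforlisten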
00:24:17.835 [2024-11-27 09:56:33.110872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:17.835 [2024-11-27 09:56:33.111025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:17.835 [2024-11-27 09:56:33.111192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:17.835 [2024-11-27 09:56:33.111212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.407 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:18.407 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:24:18.407 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:18.407 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:18.407 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:18.407 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:18.407 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:18.407 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.407 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:18.407 [2024-11-27 09:56:33.821780] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:18.407 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.407 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:18.407 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:18.407 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:18.407 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:18.407 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:18.407 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:18.407 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:18.407 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:18.407 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:18.407 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:18.407 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:18.407 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:24:18.407 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:18.407 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:18.407 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:18.407 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:18.407 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:18.407 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:18.407 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:18.407 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:18.407 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:18.667 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:18.667 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:18.667 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:18.667 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:18.667 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:18.667 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.667 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:18.667 Malloc1 00:24:18.667 [2024-11-27 09:56:33.928988] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:18.667 Malloc2 00:24:18.667 Malloc3 00:24:18.667 Malloc4 00:24:18.667 Malloc5 00:24:18.667 Malloc6 00:24:18.928 Malloc7 00:24:18.928 Malloc8 00:24:18.928 Malloc9 00:24:18.928 Malloc10 00:24:18.928 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.928 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:18.928 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:18.928 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:18.928 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3947690 00:24:18.928 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3947690 /var/tmp/bdevperf.sock 00:24:18.928 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3947690 ']' 00:24:18.928 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:18.928 09:56:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:18.928 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:18.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:18.928 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:18.928 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:18.928 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:18.928 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:18.928 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:24:18.928 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:24:18.928 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:18.928 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:18.928 { 00:24:18.928 "params": { 00:24:18.928 "name": "Nvme$subsystem", 00:24:18.928 "trtype": "$TEST_TRANSPORT", 00:24:18.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.928 "adrfam": "ipv4", 00:24:18.928 "trsvcid": "$NVMF_PORT", 00:24:18.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.928 "hdgst": ${hdgst:-false}, 00:24:18.928 "ddgst": ${ddgst:-false} 00:24:18.928 }, 00:24:18.928 "method": "bdev_nvme_attach_controller" 00:24:18.928 } 00:24:18.928 EOF 00:24:18.928 )") 00:24:18.928 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:18.928 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:18.928 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:18.928 { 00:24:18.928 "params": { 00:24:18.928 "name": "Nvme$subsystem", 00:24:18.928 "trtype": "$TEST_TRANSPORT", 00:24:18.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.928 "adrfam": "ipv4", 00:24:18.928 "trsvcid": "$NVMF_PORT", 00:24:18.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.928 "hdgst": ${hdgst:-false}, 00:24:18.928 "ddgst": ${ddgst:-false} 00:24:18.928 }, 00:24:18.928 "method": "bdev_nvme_attach_controller" 00:24:18.928 } 00:24:18.928 EOF 00:24:18.928 )") 00:24:18.928 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:18.928 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:18.928 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:18.928 { 00:24:18.928 "params": { 00:24:18.928 
"name": "Nvme$subsystem", 00:24:18.928 "trtype": "$TEST_TRANSPORT", 00:24:18.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.928 "adrfam": "ipv4", 00:24:18.928 "trsvcid": "$NVMF_PORT", 00:24:18.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.928 "hdgst": ${hdgst:-false}, 00:24:18.928 "ddgst": ${ddgst:-false} 00:24:18.928 }, 00:24:18.928 "method": "bdev_nvme_attach_controller" 00:24:18.928 } 00:24:18.928 EOF 00:24:18.928 )") 00:24:18.928 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:18.928 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:18.928 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:18.928 { 00:24:18.928 "params": { 00:24:18.928 "name": "Nvme$subsystem", 00:24:18.928 "trtype": "$TEST_TRANSPORT", 00:24:18.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.929 "adrfam": "ipv4", 00:24:18.929 "trsvcid": "$NVMF_PORT", 00:24:18.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.929 "hdgst": ${hdgst:-false}, 00:24:18.929 "ddgst": ${ddgst:-false} 00:24:18.929 }, 00:24:18.929 "method": "bdev_nvme_attach_controller" 00:24:18.929 } 00:24:18.929 EOF 00:24:18.929 )") 00:24:18.929 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:18.929 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:18.929 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:18.929 { 00:24:18.929 "params": { 00:24:18.929 "name": "Nvme$subsystem", 00:24:18.929 "trtype": "$TEST_TRANSPORT", 00:24:18.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.929 "adrfam": "ipv4", 00:24:18.929 "trsvcid": "$NVMF_PORT", 00:24:18.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.929 "hdgst": ${hdgst:-false}, 00:24:18.929 "ddgst": ${ddgst:-false} 00:24:18.929 }, 00:24:18.929 "method": "bdev_nvme_attach_controller" 00:24:18.929 } 00:24:18.929 EOF 00:24:18.929 )") 00:24:18.929 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:18.929 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:18.929 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:18.929 { 00:24:18.929 "params": { 00:24:18.929 "name": "Nvme$subsystem", 00:24:18.929 "trtype": "$TEST_TRANSPORT", 00:24:18.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.929 "adrfam": "ipv4", 00:24:18.929 "trsvcid": "$NVMF_PORT", 00:24:18.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.929 "hdgst": ${hdgst:-false}, 00:24:18.929 "ddgst": ${ddgst:-false} 00:24:18.929 }, 00:24:18.929 "method": "bdev_nvme_attach_controller" 00:24:18.929 } 00:24:18.929 EOF 00:24:18.929 )") 00:24:18.929 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:18.929 [2024-11-27 09:56:34.376022] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
00:24:18.929 [2024-11-27 09:56:34.376076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3947690 ] 00:24:18.929 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:18.929 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:18.929 { 00:24:18.929 "params": { 00:24:18.929 "name": "Nvme$subsystem", 00:24:18.929 "trtype": "$TEST_TRANSPORT", 00:24:18.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.929 "adrfam": "ipv4", 00:24:18.929 "trsvcid": "$NVMF_PORT", 00:24:18.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.929 "hdgst": ${hdgst:-false}, 00:24:18.929 "ddgst": ${ddgst:-false} 00:24:18.929 }, 00:24:18.929 "method": "bdev_nvme_attach_controller" 00:24:18.929 } 00:24:18.929 EOF 00:24:18.929 )") 00:24:18.929 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:18.929 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:18.929 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:18.929 { 00:24:18.929 "params": { 00:24:18.929 "name": "Nvme$subsystem", 00:24:18.929 "trtype": "$TEST_TRANSPORT", 00:24:18.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.929 "adrfam": "ipv4", 00:24:18.929 "trsvcid": "$NVMF_PORT", 00:24:18.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.929 "hdgst": ${hdgst:-false}, 00:24:18.929 "ddgst": ${ddgst:-false} 00:24:18.929 }, 00:24:18.929 "method": "bdev_nvme_attach_controller" 00:24:18.929 } 00:24:18.929 EOF 00:24:18.929 )") 00:24:18.929 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:19.189 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:19.189 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:19.189 { 00:24:19.189 "params": { 00:24:19.189 "name": "Nvme$subsystem", 00:24:19.189 "trtype": "$TEST_TRANSPORT", 00:24:19.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:19.189 "adrfam": "ipv4", 00:24:19.189 "trsvcid": "$NVMF_PORT", 00:24:19.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:19.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:19.189 "hdgst": ${hdgst:-false}, 00:24:19.189 "ddgst": ${ddgst:-false} 00:24:19.189 }, 00:24:19.189 "method": "bdev_nvme_attach_controller" 00:24:19.189 } 00:24:19.189 EOF 00:24:19.189 )") 00:24:19.189 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:19.189 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:19.189 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:19.189 { 00:24:19.189 "params": { 00:24:19.189 "name": "Nvme$subsystem", 00:24:19.189 "trtype": "$TEST_TRANSPORT", 00:24:19.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:19.189 
"adrfam": "ipv4", 00:24:19.189 "trsvcid": "$NVMF_PORT", 00:24:19.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:19.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:19.189 "hdgst": ${hdgst:-false}, 00:24:19.189 "ddgst": ${ddgst:-false} 00:24:19.189 }, 00:24:19.189 "method": "bdev_nvme_attach_controller" 00:24:19.189 } 00:24:19.190 EOF 00:24:19.190 )") 00:24:19.190 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:19.190 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:24:19.190 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:24:19.190 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:19.190 "params": { 00:24:19.190 "name": "Nvme1", 00:24:19.190 "trtype": "tcp", 00:24:19.190 "traddr": "10.0.0.2", 00:24:19.190 "adrfam": "ipv4", 00:24:19.190 "trsvcid": "4420", 00:24:19.190 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:19.190 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:19.190 "hdgst": false, 00:24:19.190 "ddgst": false 00:24:19.190 }, 00:24:19.190 "method": "bdev_nvme_attach_controller" 00:24:19.190 },{ 00:24:19.190 "params": { 00:24:19.190 "name": "Nvme2", 00:24:19.190 "trtype": "tcp", 00:24:19.190 "traddr": "10.0.0.2", 00:24:19.190 "adrfam": "ipv4", 00:24:19.190 "trsvcid": "4420", 00:24:19.190 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:19.190 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:19.190 "hdgst": false, 00:24:19.190 "ddgst": false 00:24:19.190 }, 00:24:19.190 "method": "bdev_nvme_attach_controller" 00:24:19.190 },{ 00:24:19.190 "params": { 00:24:19.190 "name": "Nvme3", 00:24:19.190 "trtype": "tcp", 00:24:19.190 "traddr": "10.0.0.2", 00:24:19.190 "adrfam": "ipv4", 00:24:19.190 "trsvcid": "4420", 00:24:19.190 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:19.190 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:19.190 "hdgst": false, 00:24:19.190 "ddgst": false 00:24:19.190 }, 00:24:19.190 "method": "bdev_nvme_attach_controller" 00:24:19.190 },{ 00:24:19.190 "params": { 00:24:19.190 "name": "Nvme4", 00:24:19.190 "trtype": "tcp", 00:24:19.190 "traddr": "10.0.0.2", 00:24:19.190 "adrfam": "ipv4", 00:24:19.190 "trsvcid": "4420", 00:24:19.190 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:19.190 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:19.190 "hdgst": false, 00:24:19.190 "ddgst": false 00:24:19.190 }, 00:24:19.190 "method": "bdev_nvme_attach_controller" 00:24:19.190 },{ 00:24:19.190 "params": { 00:24:19.190 "name": "Nvme5", 00:24:19.190 "trtype": "tcp", 00:24:19.190 "traddr": "10.0.0.2", 00:24:19.190 "adrfam": "ipv4", 00:24:19.190 "trsvcid": "4420", 00:24:19.190 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:19.190 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:19.190 "hdgst": false, 00:24:19.190 "ddgst": false 00:24:19.190 }, 00:24:19.190 "method": "bdev_nvme_attach_controller" 00:24:19.190 },{ 00:24:19.190 "params": { 00:24:19.190 "name": "Nvme6", 00:24:19.190 "trtype": "tcp", 00:24:19.190 "traddr": "10.0.0.2", 00:24:19.190 "adrfam": "ipv4", 00:24:19.190 "trsvcid": "4420", 00:24:19.190 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:19.190 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:19.190 "hdgst": false, 00:24:19.190 "ddgst": false 00:24:19.190 }, 00:24:19.190 "method": "bdev_nvme_attach_controller" 00:24:19.190 },{ 00:24:19.190 "params": { 00:24:19.190 "name": "Nvme7", 00:24:19.190 "trtype": "tcp", 00:24:19.190 "traddr": "10.0.0.2", 
00:24:19.190 "adrfam": "ipv4", 00:24:19.190 "trsvcid": "4420", 00:24:19.190 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:19.190 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:19.190 "hdgst": false, 00:24:19.190 "ddgst": false 00:24:19.190 }, 00:24:19.190 "method": "bdev_nvme_attach_controller" 00:24:19.190 },{ 00:24:19.190 "params": { 00:24:19.190 "name": "Nvme8", 00:24:19.190 "trtype": "tcp", 00:24:19.190 "traddr": "10.0.0.2", 00:24:19.190 "adrfam": "ipv4", 00:24:19.190 "trsvcid": "4420", 00:24:19.190 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:19.190 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:19.190 "hdgst": false, 00:24:19.190 "ddgst": false 00:24:19.190 }, 00:24:19.190 "method": "bdev_nvme_attach_controller" 00:24:19.190 },{ 00:24:19.190 "params": { 00:24:19.190 "name": "Nvme9", 00:24:19.190 "trtype": "tcp", 00:24:19.190 "traddr": "10.0.0.2", 00:24:19.190 "adrfam": "ipv4", 00:24:19.190 "trsvcid": "4420", 00:24:19.190 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:19.190 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:19.190 "hdgst": false, 00:24:19.190 "ddgst": false 00:24:19.190 }, 00:24:19.190 "method": "bdev_nvme_attach_controller" 00:24:19.190 },{ 00:24:19.190 "params": { 00:24:19.190 "name": "Nvme10", 00:24:19.190 "trtype": "tcp", 00:24:19.190 "traddr": "10.0.0.2", 00:24:19.190 "adrfam": "ipv4", 00:24:19.190 "trsvcid": "4420", 00:24:19.190 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:19.190 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:19.190 "hdgst": false, 00:24:19.190 "ddgst": false 00:24:19.190 }, 00:24:19.190 "method": "bdev_nvme_attach_controller" 00:24:19.190 }' 00:24:19.190 [2024-11-27 09:56:34.469808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.190 [2024-11-27 09:56:34.506152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.100 Running I/O for 10 seconds... 
00:24:21.100 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:21.101 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:24:21.101 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:21.101 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.101 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:21.101 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.101 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:21.101 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:21.101 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:24:21.101 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:24:21.101 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:24:21.101 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:24:21.101 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:21.101 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:21.101 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:21.101 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.101 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:21.101 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.101 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:24:21.101 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:24:21.101 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:24:21.360 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:24:21.360 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:21.360 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:21.360 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:21.360 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.360 09:56:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:21.360 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.360 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:24:21.360 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:24:21.360 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:24:21.621 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:24:21.621 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:21.621 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:21.621 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:21.621 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.621 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:21.621 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.621 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:24:21.621 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:24:21.621 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:24:21.621 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:24:21.621 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:24:21.621 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3947690 00:24:21.621 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3947690 ']' 00:24:21.621 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3947690 00:24:21.621 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:24:21.621 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:21.621 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3947690 00:24:21.621 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:21.621 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:21.621 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3947690' 00:24:21.621 killing process with pid 3947690 00:24:21.621 09:56:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3947690
00:24:21.621 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3947690
00:24:21.882 Received shutdown signal, test time was about 0.996571 seconds
00:24:21.882
00:24:21.882 Latency(us)
00:24:21.882 [2024-11-27T08:56:37.348Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:21.882 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:21.882 Verification LBA range: start 0x0 length 0x400
00:24:21.882 Nvme1n1 : 0.95 202.89 12.68 0.00 0.00 311749.40 33423.36 234181.97
00:24:21.882 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:21.882 Verification LBA range: start 0x0 length 0x400
00:24:21.882 Nvme2n1 : 0.96 267.60 16.72 0.00 0.00 231142.19 19223.89 239424.85
00:24:21.882 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:21.882 Verification LBA range: start 0x0 length 0x400
00:24:21.882 Nvme3n1 : 0.96 265.62 16.60 0.00 0.00 228488.53 34515.63 225443.84
00:24:21.882 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:21.882 Verification LBA range: start 0x0 length 0x400
00:24:21.882 Nvme4n1 : 1.00 261.13 16.32 0.00 0.00 217946.98 3058.35 239424.85
00:24:21.882 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:21.882 Verification LBA range: start 0x0 length 0x400
00:24:21.882 Nvme5n1 : 0.93 205.86 12.87 0.00 0.00 281844.34 18459.31 242920.11
00:24:21.882 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:21.882 Verification LBA range: start 0x0 length 0x400
00:24:21.882 Nvme6n1 : 0.94 204.03 12.75 0.00 0.00 278545.07 16056.32 251658.24
00:24:21.882 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:21.882 Verification LBA range: start 0x0 length 0x400
00:24:21.882 Nvme7n1 : 0.96 270.24 16.89 0.00 0.00 205265.11 3604.48 244667.73
00:24:21.882 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:21.882 Verification LBA range: start 0x0 length 0x400
00:24:21.882 Nvme8n1 : 0.97 264.75 16.55 0.00 0.00 205114.77 13871.79 248162.99
00:24:21.882 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:21.882 Verification LBA range: start 0x0 length 0x400
00:24:21.882 Nvme9n1 : 0.96 266.35 16.65 0.00 0.00 199401.60 17913.17 258648.75
00:24:21.882 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:21.882 Verification LBA range: start 0x0 length 0x400
00:24:21.882 Nvme10n1 : 0.95 202.22 12.64 0.00 0.00 256009.67 15619.41 267386.88
00:24:21.882 [2024-11-27T08:56:37.348Z] ===================================================================================================================
00:24:21.882 [2024-11-27T08:56:37.348Z] Total : 2410.70 150.67 0.00 0.00 236977.95 3058.35 267386.88
00:24:21.882 [2024-11-27T08:56:37.348Z] ===================================================================================================================
00:24:21.882 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:24:22.825 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3947309
00:24:22.825 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:24:22.825 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
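With bdevperf killed mid-run and the per-controller latency table reported above, stoptarget and nvmftestfini (traced below) unwind the prologue step by step. Condensed, and assuming remove_spdk_ns amounts to deleting the test namespace (its body is not expanded in this trace), the teardown is roughly:

# teardown sketch -- this run's pid and device names
modprobe -v -r nvme-tcp                                  # unload initiator-side modules
modprobe -v -r nvme-fabrics
kill 3947309 && wait 3947309                             # stop nvmf_tgt (killprocess)
iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only the SPDK-tagged rule
ip netns delete cvl_0_0_ns_spdk                          # assumed body of remove_spdk_ns
ip -4 addr flush cvl_0_1                                 # clear the initiator-side address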
00:24:22.825 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini
09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup
09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync
09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e
09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20}
09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:22.825 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:24:23.088 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:23.088 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e
00:24:23.088 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0
00:24:23.088 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3947309 ']'
00:24:23.088 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3947309
00:24:23.088 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3947309 ']'
00:24:23.088 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3947309
00:24:23.088 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname
00:24:23.088 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:23.088 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3947309
00:24:23.088 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:24:23.088 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:24:23.088 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3947309'
00:24:23.088 killing process with pid 3947309
00:24:23.088 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3947309
00:24:23.088 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3947309
00:24:23.349 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:23.349 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr
09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save
09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore
09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns
09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:25.275 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:25.275
00:24:25.275 real 0m8.146s
00:24:25.275 user 0m25.039s
00:24:25.275 sys 0m1.330s
00:24:25.275 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:25.275 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:25.275 ************************************
00:24:25.275 END TEST nvmf_shutdown_tc2
00:24:25.275 ************************************
00:24:25.275 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3
09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:24:25.538 ************************************
00:24:25.538 START TEST nvmf_shutdown_tc3
00:24:25.538 ************************************
00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget
09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit
09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']'
09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs
09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 --
nvmf/common.sh@438 -- # local -g is_hw=no 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:25.538 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:25.538 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:25.538 09:56:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:25.538 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:25.539 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:25.539 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:25.539 09:56:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:25.539 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:25.800 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:25.800 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:25.801 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:25.801 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:25.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:25.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:24:25.801 00:24:25.801 --- 10.0.0.2 ping statistics --- 00:24:25.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.801 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:24:25.801 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:25.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:25.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:24:25.801 00:24:25.801 --- 10.0.0.1 ping statistics --- 00:24:25.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.801 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:24:25.801 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:25.801 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:24:25.801 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:25.801 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:25.801 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:25.801 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:25.801 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:25.801 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:25.801 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:25.801 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:25.801 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:25.801 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:25.801 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:25.801 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3949152 00:24:25.801 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3949152 00:24:25.801 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:25.801 09:56:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3949152 ']' 00:24:25.801 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:25.801 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:25.801 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:25.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:25.801 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:25.801 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:25.801 [2024-11-27 09:56:41.213116] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:24:25.801 [2024-11-27 09:56:41.213192] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:26.061 [2024-11-27 09:56:41.308752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:26.061 [2024-11-27 09:56:41.342659] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:26.061 [2024-11-27 09:56:41.342688] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:26.061 [2024-11-27 09:56:41.342694] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:26.061 [2024-11-27 09:56:41.342699] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:26.062 [2024-11-27 09:56:41.342704] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
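For readers reproducing the nvmf_tcp_init sequence traced above outside the harness, it reduces to the following sketch (a minimal recap, not harness code; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing are specific to this run):
ip netns add cvl_0_0_ns_spdk                                        # namespace that hosts the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP listening port
ping -c 1 10.0.0.2                                                  # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> initiator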
00:24:26.062 [2024-11-27 09:56:41.344135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:26.062 [2024-11-27 09:56:41.344297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:26.062 [2024-11-27 09:56:41.344410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.062 [2024-11-27 09:56:41.344412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:26.633 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:26.633 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:24:26.633 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:26.633 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:26.633 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:26.633 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:26.633 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:26.633 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.633 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:26.633 [2024-11-27 09:56:42.066749] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:26.633 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.633 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:26.633 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:26.633 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:26.633 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:26.633 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:26.633 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:26.633 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:26.633 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:26.633 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:26.633 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:26.633 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:26.633 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:24:26.633 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:26.894 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:26.894 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:26.894 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:26.894 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:26.894 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:26.894 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:26.894 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:26.894 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:26.894 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:26.894 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:26.894 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:26.894 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:26.894 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:26.894 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.894 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:26.894 Malloc1 00:24:26.894 [2024-11-27 09:56:42.183072] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:26.894 Malloc2 00:24:26.894 Malloc3 00:24:26.894 Malloc4 00:24:26.894 Malloc5 00:24:26.894 Malloc6 00:24:27.157 Malloc7 00:24:27.157 Malloc8 00:24:27.157 Malloc9 00:24:27.157 Malloc10 00:24:27.157 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.157 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:27.157 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:27.157 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:27.157 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3949481 00:24:27.157 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3949481 /var/tmp/bdevperf.sock 00:24:27.157 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3949481 ']' 00:24:27.157 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:27.157 09:56:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:27.157 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:27.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:27.157 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:27.157 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:27.157 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:27.157 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:27.157 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:24:27.157 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:24:27.157 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:27.157 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:27.157 { 00:24:27.157 "params": { 00:24:27.157 "name": "Nvme$subsystem", 00:24:27.157 "trtype": "$TEST_TRANSPORT", 00:24:27.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:27.157 "adrfam": "ipv4", 00:24:27.157 "trsvcid": "$NVMF_PORT", 00:24:27.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:27.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:27.157 "hdgst": ${hdgst:-false}, 00:24:27.157 "ddgst": ${ddgst:-false} 00:24:27.157 }, 00:24:27.157 "method": "bdev_nvme_attach_controller" 00:24:27.157 } 00:24:27.157 EOF 00:24:27.157 )") 00:24:27.157 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:27.157 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:27.157 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:27.157 { 00:24:27.157 "params": { 00:24:27.157 "name": "Nvme$subsystem", 00:24:27.157 "trtype": "$TEST_TRANSPORT", 00:24:27.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:27.157 "adrfam": "ipv4", 00:24:27.157 "trsvcid": "$NVMF_PORT", 00:24:27.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:27.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:27.157 "hdgst": ${hdgst:-false}, 00:24:27.157 "ddgst": ${ddgst:-false} 00:24:27.157 }, 00:24:27.157 "method": "bdev_nvme_attach_controller" 00:24:27.157 } 00:24:27.157 EOF 00:24:27.157 )") 00:24:27.157 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:27.157 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:27.157 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:27.157 { 00:24:27.157 "params": { 00:24:27.157 
"name": "Nvme$subsystem", 00:24:27.157 "trtype": "$TEST_TRANSPORT", 00:24:27.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:27.157 "adrfam": "ipv4", 00:24:27.157 "trsvcid": "$NVMF_PORT", 00:24:27.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:27.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:27.157 "hdgst": ${hdgst:-false}, 00:24:27.157 "ddgst": ${ddgst:-false} 00:24:27.157 }, 00:24:27.157 "method": "bdev_nvme_attach_controller" 00:24:27.157 } 00:24:27.157 EOF 00:24:27.157 )") 00:24:27.157 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:27.157 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:27.157 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:27.157 { 00:24:27.157 "params": { 00:24:27.157 "name": "Nvme$subsystem", 00:24:27.157 "trtype": "$TEST_TRANSPORT", 00:24:27.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:27.157 "adrfam": "ipv4", 00:24:27.157 "trsvcid": "$NVMF_PORT", 00:24:27.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:27.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:27.157 "hdgst": ${hdgst:-false}, 00:24:27.157 "ddgst": ${ddgst:-false} 00:24:27.157 }, 00:24:27.157 "method": "bdev_nvme_attach_controller" 00:24:27.157 } 00:24:27.157 EOF 00:24:27.157 )") 00:24:27.157 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:27.157 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:27.157 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:27.157 { 00:24:27.157 "params": { 00:24:27.157 "name": "Nvme$subsystem", 00:24:27.157 "trtype": "$TEST_TRANSPORT", 00:24:27.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:27.157 "adrfam": "ipv4", 00:24:27.157 "trsvcid": "$NVMF_PORT", 00:24:27.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:27.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:27.157 "hdgst": ${hdgst:-false}, 00:24:27.157 "ddgst": ${ddgst:-false} 00:24:27.157 }, 00:24:27.157 "method": "bdev_nvme_attach_controller" 00:24:27.157 } 00:24:27.157 EOF 00:24:27.157 )") 00:24:27.157 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:27.418 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:27.418 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:27.418 { 00:24:27.418 "params": { 00:24:27.418 "name": "Nvme$subsystem", 00:24:27.418 "trtype": "$TEST_TRANSPORT", 00:24:27.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:27.418 "adrfam": "ipv4", 00:24:27.418 "trsvcid": "$NVMF_PORT", 00:24:27.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:27.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:27.418 "hdgst": ${hdgst:-false}, 00:24:27.418 "ddgst": ${ddgst:-false} 00:24:27.418 }, 00:24:27.418 "method": "bdev_nvme_attach_controller" 00:24:27.418 } 00:24:27.418 EOF 00:24:27.418 )") 00:24:27.418 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:27.418 [2024-11-27 09:56:42.631138] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
00:24:27.418 [2024-11-27 09:56:42.631204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3949481 ] 00:24:27.418 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:27.418 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:27.418 { 00:24:27.418 "params": { 00:24:27.418 "name": "Nvme$subsystem", 00:24:27.418 "trtype": "$TEST_TRANSPORT", 00:24:27.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:27.418 "adrfam": "ipv4", 00:24:27.418 "trsvcid": "$NVMF_PORT", 00:24:27.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:27.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:27.418 "hdgst": ${hdgst:-false}, 00:24:27.418 "ddgst": ${ddgst:-false} 00:24:27.418 }, 00:24:27.418 "method": "bdev_nvme_attach_controller" 00:24:27.418 } 00:24:27.419 EOF 00:24:27.419 )") 00:24:27.419 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:27.419 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:27.419 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:27.419 { 00:24:27.419 "params": { 00:24:27.419 "name": "Nvme$subsystem", 00:24:27.419 "trtype": "$TEST_TRANSPORT", 00:24:27.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:27.419 "adrfam": "ipv4", 00:24:27.419 "trsvcid": "$NVMF_PORT", 00:24:27.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:27.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:27.419 "hdgst": ${hdgst:-false}, 00:24:27.419 "ddgst": ${ddgst:-false} 00:24:27.419 }, 00:24:27.419 "method": "bdev_nvme_attach_controller" 00:24:27.419 } 00:24:27.419 EOF 00:24:27.419 )") 00:24:27.419 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:27.419 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:27.419 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:27.419 { 00:24:27.419 "params": { 00:24:27.419 "name": "Nvme$subsystem", 00:24:27.419 "trtype": "$TEST_TRANSPORT", 00:24:27.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:27.419 "adrfam": "ipv4", 00:24:27.419 "trsvcid": "$NVMF_PORT", 00:24:27.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:27.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:27.419 "hdgst": ${hdgst:-false}, 00:24:27.419 "ddgst": ${ddgst:-false} 00:24:27.419 }, 00:24:27.419 "method": "bdev_nvme_attach_controller" 00:24:27.419 } 00:24:27.419 EOF 00:24:27.419 )") 00:24:27.419 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:27.419 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:27.419 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:27.419 { 00:24:27.419 "params": { 00:24:27.419 "name": "Nvme$subsystem", 00:24:27.419 "trtype": "$TEST_TRANSPORT", 00:24:27.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:27.419 
"adrfam": "ipv4", 00:24:27.419 "trsvcid": "$NVMF_PORT", 00:24:27.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:27.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:27.419 "hdgst": ${hdgst:-false}, 00:24:27.419 "ddgst": ${ddgst:-false} 00:24:27.419 }, 00:24:27.419 "method": "bdev_nvme_attach_controller" 00:24:27.419 } 00:24:27.419 EOF 00:24:27.419 )") 00:24:27.419 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:27.419 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:24:27.419 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:24:27.419 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:27.419 "params": { 00:24:27.419 "name": "Nvme1", 00:24:27.419 "trtype": "tcp", 00:24:27.419 "traddr": "10.0.0.2", 00:24:27.419 "adrfam": "ipv4", 00:24:27.419 "trsvcid": "4420", 00:24:27.419 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:27.419 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:27.419 "hdgst": false, 00:24:27.419 "ddgst": false 00:24:27.419 }, 00:24:27.419 "method": "bdev_nvme_attach_controller" 00:24:27.419 },{ 00:24:27.419 "params": { 00:24:27.419 "name": "Nvme2", 00:24:27.419 "trtype": "tcp", 00:24:27.419 "traddr": "10.0.0.2", 00:24:27.419 "adrfam": "ipv4", 00:24:27.419 "trsvcid": "4420", 00:24:27.419 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:27.419 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:27.419 "hdgst": false, 00:24:27.419 "ddgst": false 00:24:27.419 }, 00:24:27.419 "method": "bdev_nvme_attach_controller" 00:24:27.419 },{ 00:24:27.419 "params": { 00:24:27.419 "name": "Nvme3", 00:24:27.419 "trtype": "tcp", 00:24:27.419 "traddr": "10.0.0.2", 00:24:27.419 "adrfam": "ipv4", 00:24:27.419 "trsvcid": "4420", 00:24:27.419 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:27.419 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:27.419 "hdgst": false, 00:24:27.419 "ddgst": false 00:24:27.419 }, 00:24:27.419 "method": "bdev_nvme_attach_controller" 00:24:27.419 },{ 00:24:27.419 "params": { 00:24:27.419 "name": "Nvme4", 00:24:27.419 "trtype": "tcp", 00:24:27.419 "traddr": "10.0.0.2", 00:24:27.419 "adrfam": "ipv4", 00:24:27.419 "trsvcid": "4420", 00:24:27.419 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:27.419 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:27.419 "hdgst": false, 00:24:27.419 "ddgst": false 00:24:27.419 }, 00:24:27.419 "method": "bdev_nvme_attach_controller" 00:24:27.419 },{ 00:24:27.419 "params": { 00:24:27.419 "name": "Nvme5", 00:24:27.419 "trtype": "tcp", 00:24:27.419 "traddr": "10.0.0.2", 00:24:27.419 "adrfam": "ipv4", 00:24:27.419 "trsvcid": "4420", 00:24:27.419 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:27.419 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:27.419 "hdgst": false, 00:24:27.419 "ddgst": false 00:24:27.419 }, 00:24:27.419 "method": "bdev_nvme_attach_controller" 00:24:27.419 },{ 00:24:27.419 "params": { 00:24:27.419 "name": "Nvme6", 00:24:27.419 "trtype": "tcp", 00:24:27.419 "traddr": "10.0.0.2", 00:24:27.419 "adrfam": "ipv4", 00:24:27.419 "trsvcid": "4420", 00:24:27.419 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:27.419 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:27.419 "hdgst": false, 00:24:27.419 "ddgst": false 00:24:27.419 }, 00:24:27.419 "method": "bdev_nvme_attach_controller" 00:24:27.419 },{ 00:24:27.419 "params": { 00:24:27.419 "name": "Nvme7", 00:24:27.419 "trtype": "tcp", 00:24:27.419 "traddr": "10.0.0.2", 
00:24:27.419 "adrfam": "ipv4", 00:24:27.419 "trsvcid": "4420", 00:24:27.419 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:27.419 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:27.419 "hdgst": false, 00:24:27.419 "ddgst": false 00:24:27.419 }, 00:24:27.419 "method": "bdev_nvme_attach_controller" 00:24:27.419 },{ 00:24:27.419 "params": { 00:24:27.419 "name": "Nvme8", 00:24:27.419 "trtype": "tcp", 00:24:27.419 "traddr": "10.0.0.2", 00:24:27.419 "adrfam": "ipv4", 00:24:27.419 "trsvcid": "4420", 00:24:27.419 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:27.419 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:27.419 "hdgst": false, 00:24:27.419 "ddgst": false 00:24:27.419 }, 00:24:27.419 "method": "bdev_nvme_attach_controller" 00:24:27.419 },{ 00:24:27.419 "params": { 00:24:27.419 "name": "Nvme9", 00:24:27.419 "trtype": "tcp", 00:24:27.419 "traddr": "10.0.0.2", 00:24:27.419 "adrfam": "ipv4", 00:24:27.419 "trsvcid": "4420", 00:24:27.419 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:27.419 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:27.419 "hdgst": false, 00:24:27.419 "ddgst": false 00:24:27.419 }, 00:24:27.419 "method": "bdev_nvme_attach_controller" 00:24:27.419 },{ 00:24:27.419 "params": { 00:24:27.419 "name": "Nvme10", 00:24:27.419 "trtype": "tcp", 00:24:27.419 "traddr": "10.0.0.2", 00:24:27.419 "adrfam": "ipv4", 00:24:27.419 "trsvcid": "4420", 00:24:27.419 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:27.419 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:27.419 "hdgst": false, 00:24:27.419 "ddgst": false 00:24:27.419 }, 00:24:27.419 "method": "bdev_nvme_attach_controller" 00:24:27.419 }' 00:24:27.419 [2024-11-27 09:56:42.721683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.419 [2024-11-27 09:56:42.758063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.805 Running I/O for 10 seconds... 
00:24:28.805 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:28.805 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:24:28.805 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:28.805 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.805 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:29.067 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.067 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:29.067 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:29.067 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:29.067 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:24:29.067 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:24:29.067 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:24:29.067 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:24:29.067 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:29.067 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:29.067 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:29.067 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.067 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:29.067 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.067 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:24:29.067 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:24:29.067 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:24:29.328 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:24:29.328 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:29.328 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:29.328 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:29.328 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.328 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:29.328 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.328 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:24:29.328 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:24:29.328 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:24:29.597 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:24:29.597 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:29.597 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:29.597 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:29.597 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.597 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:29.597 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.597 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=195 00:24:29.597 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:24:29.597 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:24:29.597 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:24:29.597 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:24:29.597 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3949152 00:24:29.597 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3949152 ']' 00:24:29.597 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3949152 00:24:29.597 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:24:29.597 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:29.597 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3949152 00:24:29.597 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:29.597 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:29.597 09:56:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3949152' 00:24:29.597 killing process with pid 3949152 00:24:29.597 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3949152 00:24:29.597 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3949152 00:24:29.597 [2024-11-27 09:56:45.023129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89110 is same with the state(6) to be set 00:24:29.597 [... identical message repeated for tqpair=0x1c89110 from 09:56:45.023180 through 09:56:45.023478 ...] 00:24:29.598 [2024-11-27 09:56:45.024416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb74e0 is same with the state(6) to be set 00:24:29.598 [... identical message repeated for tqpair=0x1cb74e0 from 09:56:45.024448 through 09:56:45.024775 ...] 00:24:29.599 [2024-11-27 09:56:45.026714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27
09:56:45.026739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same 
with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026955] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.599 [2024-11-27 09:56:45.026993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.600 [2024-11-27 09:56:45.026998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.600 [2024-11-27 09:56:45.027003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.600 [2024-11-27 09:56:45.027007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.600 [2024-11-27 09:56:45.027012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.600 [2024-11-27 09:56:45.027017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.600 [2024-11-27 09:56:45.027021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.600 [2024-11-27 09:56:45.027026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.600 [2024-11-27 09:56:45.027030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.600 [2024-11-27 09:56:45.027035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.600 [2024-11-27 09:56:45.027040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.600 [2024-11-27 09:56:45.027045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89ad0 is same with the state(6) to be set 00:24:29.600 [2024-11-27 09:56:45.028032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.600 [2024-11-27 09:56:45.028067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.600 [2024-11-27 09:56:45.028085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 
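The flood of identical lines above comes from a single guard in SPDK's TCP transport: nvmf_tcp_qpair_set_recv_state() (lib/nvmf/tcp.c, line 1773 in this build) logs at ERROR level whenever it is asked to set the PDU receive state a qpair already holds, and a connection wedged in one state during teardown hits that guard on every poll. The sketch below is a minimal, self-contained paraphrase with reduced, hypothetical types; the real function operates on struct spdk_nvmf_tcp_qpair and does more bookkeeping, and the mapping of value 6 to a particular enum member is an assumption that depends on the SPDK revision.

```c
#include <stdio.h>

/* Reduced stand-ins for the SPDK types; ERROR = 6 is an assumed mapping
 * for "state(6)" and varies by SPDK revision. */
enum nvme_tcp_pdu_recv_state {
	NVME_TCP_PDU_RECV_STATE_AWAIT_PDU_READY = 0,
	/* ... intermediate receive states elided ... */
	NVME_TCP_PDU_RECV_STATE_ERROR = 6,
};

struct tcp_qpair {
	enum nvme_tcp_pdu_recv_state recv_state;
};

/* Paraphrase of the guard: re-setting the current state is a no-op,
 * but it is reported at ERROR level, one line per call. */
static void
set_recv_state(struct tcp_qpair *tqpair, enum nvme_tcp_pdu_recv_state state)
{
	if (tqpair->recv_state == state) {
		fprintf(stderr,
		        "The recv state of tqpair=%p is same with the state(%d) to be set\n",
		        (void *)tqpair, (int)state);
		return;
	}
	tqpair->recv_state = state;
}

int main(void)
{
	struct tcp_qpair q = { NVME_TCP_PDU_RECV_STATE_ERROR };

	/* A qpair stuck in one state during teardown produces exactly the
	 * repeated line seen in the log, once per poll iteration. */
	set_recv_state(&q, NVME_TCP_PDU_RECV_STATE_ERROR);
	return 0;
}
```

Because the guard returns without changing anything, the repetition reads as noise from a qpair already on its way down; the interesting signal is the state transition that never completes.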
00:24:29.600 [2024-11-27 09:56:45.028032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.600 [2024-11-27 09:56:45.028067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... analogous WRITE/ABORTED - SQ DELETION pairs for cid:40 through cid:62 (lba advancing by 128 blocks per command, 29696 through 32512) elided; a concurrent run of the tcp.c:1773 recv-state error for tqpair=0x1c89fc0 (09:56:45.028145 through 09:56:45.028533) was interleaved mid-line with these records and is likewise elided ...]
00:24:29.601 [2024-11-27 09:56:45.028536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.601 [2024-11-27 09:56:45.028544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
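Each NOTICE pair above is one queued I/O command followed by the synthetic completion it was failed with when its submission queue went away. The "(00/08)" tuple is printed as (status code type / status code): SCT 0x0 is the generic command status set and SC 0x08 is Command Aborted due to SQ Deletion. A minimal decode against the public definitions in spdk/nvme_spec.h (assuming an SPDK development environment for the header) could look like:

```c
#include <stdio.h>
#include "spdk/nvme_spec.h"

/* Decode the "(00/08)" tuple from a completion entry; field and constant
 * names here are the public ones from spdk/nvme_spec.h. */
static void
explain_status(const struct spdk_nvme_cpl *cpl)
{
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		printf("ABORTED - SQ DELETION (%02x/%02x): queue deleted "
		       "while the command was outstanding\n",
		       cpl->status.sct, cpl->status.sc);
	}
}

int main(void)
{
	struct spdk_nvme_cpl cpl = {0};

	cpl.status.sct = SPDK_NVME_SCT_GENERIC;           /* 0x0 */
	cpl.status.sc = SPDK_NVME_SC_ABORTED_SQ_DELETION; /* 0x8 */
	explain_status(&cpl);
	return 0;
}
```

Note that every completion carries dnr:0, i.e. the do-not-retry bit is clear, so the initiator remains free to requeue these I/Os on a fresh connection.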
00:24:29.601 [2024-11-27 09:56:45.028553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.601 [2024-11-27 09:56:45.028561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... analogous READ/ABORTED - SQ DELETION pairs for cid:1 through cid:37 (lba advancing by 128 blocks per command, 24704 through 29312) elided; from 09:56:45.029063 a concurrent run of the tcp.c:1773 recv-state error for tqpair=0x1c8a490 is interleaved mid-line with the final records and is elided the same way ...]
00:24:29.602 [2024-11-27 09:56:45.029063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8a490 is same with the state(6) to be set
00:24:29.602 [2024-11-27 09:56:45.029215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.602 [2024-11-27 09:56:45.029223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
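Taken together, the WRITE cid:39-63 and READ cid:0-38 pairs show a full in-flight window being drained: the commands are strictly sequential (each lba advances by len, 128 blocks) and every one is completed in place with the abort status rather than executed. The toy drain loop below, with entirely hypothetical types and names (not SPDK's internals), reproduces the one-command-then-one-abort rhythm of this output:

```c
#include <stdio.h>

/* Hypothetical reduced request type, only for illustration. */
struct req {
	unsigned cid;
	const char *op;
	unsigned long lba;
	unsigned len;
};

/* Complete every still-queued request with the abort status, printing the
 * command and then its synthetic completion, as in the log above. */
static void
drain_queue(const struct req *reqs, size_t n)
{
	for (size_t i = 0; i < n; i++) {
		printf("*NOTICE*: %s sqid:1 cid:%u nsid:1 lba:%lu len:%u "
		       "SGL TRANSPORT DATA BLOCK TRANSPORT 0x0\n",
		       reqs[i].op, reqs[i].cid, reqs[i].lba, reqs[i].len);
		printf("*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 "
		       "cdw0:0 sqhd:0000 p:0 m:0 dnr:0\n");
	}
}

int main(void)
{
	/* First two writes of the window above, values taken from the log. */
	const struct req q[] = {
		{ 39, "WRITE", 29568, 128 },
		{ 40, "WRITE", 29696, 128 },
	};

	drain_queue(q, sizeof(q) / sizeof(q[0]));
	return 0;
}
```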
[... identical error for tqpair=0x1c8a490 repeated through 09:56:45.029366; duplicate entries elided ...]
00:24:29.603 [2024-11-27 09:56:45.029899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8a960 is same with the state(6) to be set
[... identical error for tqpair=0x1c8a960 repeated; duplicate entries elided ...]
00:24:29.603 [2024-11-27 09:56:45.030170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8a960 is same with the
state(6) to be set 00:24:29.603 [2024-11-27 09:56:45.030175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8a960 is same with the state(6) to be set 00:24:29.603 [2024-11-27 09:56:45.030180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8a960 is same with the state(6) to be set 00:24:29.603 [2024-11-27 09:56:45.030185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8a960 is same with the state(6) to be set 00:24:29.603 [2024-11-27 09:56:45.030191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8a960 is same with the state(6) to be set 00:24:29.603 [2024-11-27 09:56:45.030196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8a960 is same with the state(6) to be set 00:24:29.603 [2024-11-27 09:56:45.030201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8a960 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.030206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8a960 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.030211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8a960 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.030215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8a960 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.030221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8a960 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.030225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8a960 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.030929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.030947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.030953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.030958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.030963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.030969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.030974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.030978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.030984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.030989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.030994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 
09:56:45.031108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.031209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set 00:24:29.604 [2024-11-27 09:56:45.032360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 
00:24:29.604 [2024-11-27 09:56:45.032386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... analogous READ command / ABORTED - SQ DELETION pairs repeated for cid:4 through cid:59 (lba 25088 through 32128, len:128, lba step 128 per cid), timestamps 09:56:45.032401 through 09:56:45.033385 ...]
00:24:29.606 [2024-11-27 09:56:45.033395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.606 [2024-11-27 09:56:45.033402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... WRITE cid:1 (lba:32896) and cid:2 (lba:33024) aborted the same way, timestamps 09:56:45.033412 through 09:56:45.033437 ...]
00:24:29.606 [2024-11-27 09:56:45.033447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.606 [2024-11-27 09:56:45.033454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... analogous READ command / ABORTED - SQ DELETION pairs for cid:61 (lba:32384), cid:62 (lba:32512) and cid:63 (lba:32640), timestamps 09:56:45.033463 through 09:56:45.033504 ...]
00:24:29.606 [2024-11-27 09:56:45.034089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:24:29.606 [2024-11-27 09:56:45.034146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd930 (9): Bad file descriptor
00:24:29.606 [2024-11-27 09:56:45.034190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:29.606 [2024-11-27 09:56:45.034202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ASYNC EVENT REQUEST cid:1 through cid:3 aborted the same way, timestamps 09:56:45.034211 through 09:56:45.034254 ...]
00:24:29.606 [2024-11-27 09:56:45.034262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2007b80 is same with the state(6) to be set
[... same ASYNC EVENT REQUEST cid:0-3 abort sequence repeated for tqpair=0x204f630 (09:56:45.034301 through 09:56:45.034370) and tqpair=0x1be4fc0 (09:56:45.034405 through 09:56:45.034468); ASYNC EVENT REQUEST cid:0 and cid:1 aborted again at 09:56:45.034491 through 09:56:45.034516 ...]
00:24:29.606 [2024-11-27 09:56:45.045952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ae30 is same with the state(6) to be set
[... same recv-state error repeated for tqpair=0x1c8ae30, timestamps 09:56:45.045978 through 09:56:45.046039 ...]
00:24:29.606 [2024-11-27 09:56:45.046998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8b7f0 is same with the state(6) to be set
00:24:29.607 [2024-11-27 09:56:45.050654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:29.607 [2024-11-27 09:56:45.050684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ASYNC EVENT REQUEST cid:3 aborted the same way for tqpair=0x1afe610 (09:56:45.050694 through 09:56:45.050712), then full cid:0-3 abort sequences for tqpair=0x2012d70 (09:56:45.050770 through 09:56:45.050837), tqpair=0x1bdc170 (09:56:45.050857 through 09:56:45.050921) and tqpair=0x1be6cb0 (09:56:45.050947 through 09:56:45.051020) ...]
00:24:29.607 [2024-11-27 09:56:45.051081] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:24:29.607 [2024-11-27 09:56:45.052959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.607 [2024-11-27 09:56:45.052982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... analogous WRITE command / ABORTED - SQ DELETION pairs repeated for cid:40 through cid:63 (lba 29696 through 32640, len:128, lba step 128 per cid), timestamps 09:56:45.052996 through 09:56:45.053421 ...]
00:24:29.608 [2024-11-27 09:56:45.053430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.608 [2024-11-27 09:56:45.053439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... analogous READ command / ABORTED - SQ DELETION pairs repeated for cid:1 through cid:13 (lba 24704 through 26240, len:128, lba step 128 per cid), timestamps 09:56:45.053448 through 09:56:45.053671 ...]
00:24:29.608 [2024-11-27 09:56:45.053681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.608 [2024-11-27 09:56:45.053688] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.608 [2024-11-27 09:56:45.053698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.608 [2024-11-27 09:56:45.053706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.608 [2024-11-27 09:56:45.053716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.608 [2024-11-27 09:56:45.053723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.608 [2024-11-27 09:56:45.053733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.608 [2024-11-27 09:56:45.053741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.608 [2024-11-27 09:56:45.053750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.608 [2024-11-27 09:56:45.053758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.608 [2024-11-27 09:56:45.053768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.608 [2024-11-27 09:56:45.053775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.608 [2024-11-27 09:56:45.053785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.608 [2024-11-27 09:56:45.053793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.608 [2024-11-27 09:56:45.053807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.608 [2024-11-27 09:56:45.053815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.608 [2024-11-27 09:56:45.053825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.608 [2024-11-27 09:56:45.053832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.608 [2024-11-27 09:56:45.053842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.608 [2024-11-27 09:56:45.053850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.608 [2024-11-27 09:56:45.053859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.608 [2024-11-27 09:56:45.053867] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.608 [2024-11-27 09:56:45.053878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.608 [2024-11-27 09:56:45.053886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.608 [2024-11-27 09:56:45.053897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.608 [2024-11-27 09:56:45.053905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.608 [2024-11-27 09:56:45.053916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.608 [2024-11-27 09:56:45.053925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.608 [2024-11-27 09:56:45.053935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.608 [2024-11-27 09:56:45.053945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.608 [2024-11-27 09:56:45.053956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.608 [2024-11-27 09:56:45.053964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.608 [2024-11-27 09:56:45.053975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.608 [2024-11-27 09:56:45.053984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.608 [2024-11-27 09:56:45.053994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.608 [2024-11-27 09:56:45.054003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.608 [2024-11-27 09:56:45.054014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.608 [2024-11-27 09:56:45.054022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.608 [2024-11-27 09:56:45.054033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.608 [2024-11-27 09:56:45.054044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.608 [2024-11-27 09:56:45.054056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.608 [2024-11-27 09:56:45.054065] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.608 [2024-11-27 09:56:45.054076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.608 [2024-11-27 09:56:45.054085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.608 [2024-11-27 09:56:45.054096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.608 [2024-11-27 09:56:45.054105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.608 [2024-11-27 09:56:45.054117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.609 [2024-11-27 09:56:45.054126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.609 [2024-11-27 09:56:45.054136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.609 [2024-11-27 09:56:45.054146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.609 [2024-11-27 09:56:45.054180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:29.609 [2024-11-27 09:56:45.054312] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:24:29.609 [2024-11-27 09:56:45.054338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:24:29.609 [2024-11-27 09:56:45.054364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2012d70 (9): Bad file descriptor
00:24:29.609 [2024-11-27 09:56:45.054410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2007b80 (9): Bad file descriptor
00:24:29.609 [2024-11-27 09:56:45.054451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:29.609 [2024-11-27 09:56:45.054462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.609 [2024-11-27 09:56:45.054473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:29.609 [2024-11-27 09:56:45.054482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.609 [2024-11-27 09:56:45.054491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:29.609 [2024-11-27 09:56:45.054500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.609 [2024-11-27 09:56:45.054509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:29.609 [2024-11-27 09:56:45.054517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.609 [2024-11-27 09:56:45.054526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202a8a0 is same with the state(6) to be set
00:24:29.609 [2024-11-27 09:56:45.054539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204f630 (9): Bad file descriptor
00:24:29.609 [2024-11-27 09:56:45.054568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:29.609 [2024-11-27 09:56:45.054579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.609 [2024-11-27 09:56:45.054589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:29.609 [2024-11-27 09:56:45.054599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.609 [2024-11-27 09:56:45.054609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:29.609 [2024-11-27 09:56:45.054618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.609 [2024-11-27 09:56:45.054627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:29.609 [2024-11-27 09:56:45.054636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.609 [2024-11-27 09:56:45.054645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202a6c0 is same with the state(6) to be set
00:24:29.609 [2024-11-27 09:56:45.054667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4fc0 (9): Bad file descriptor
00:24:29.609 [2024-11-27 09:56:45.054687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afe610 (9): Bad file descriptor
00:24:29.609 [2024-11-27 09:56:45.054708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdc170 (9): Bad file descriptor
00:24:29.609 [2024-11-27 09:56:45.054727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6cb0 (9): Bad file descriptor
00:24:29.878 [2024-11-27 09:56:45.056697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:24:29.878 [2024-11-27 09:56:45.056725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202a8a0 (9): Bad file descriptor
00:24:29.878 [2024-11-27 09:56:45.056956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.878 [2024-11-27 09:56:45.056974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd930 with addr=10.0.0.2, port=4420
00:24:29.878 [2024-11-27 09:56:45.056983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd930 is same with the state(6) to be set
00:24:29.878 [2024-11-27 09:56:45.057053] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:24:29.878 [2024-11-27 09:56:45.057097] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:24:29.878 [2024-11-27 09:56:45.057446] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:24:29.878 [2024-11-27 09:56:45.058654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.878 [2024-11-27 09:56:45.058698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2012d70 with addr=10.0.0.2, port=4420
00:24:29.878 [2024-11-27 09:56:45.058711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012d70 is same with the state(6) to be set
00:24:29.878 [2024-11-27 09:56:45.058744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd930 (9): Bad file descriptor
00:24:29.878 [2024-11-27 09:56:45.059294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.878 [2024-11-27 09:56:45.059318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.878 [2024-11-27 09:56:45.059336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.878 [2024-11-27 09:56:45.059350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.878 [2024-11-27 09:56:45.059362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.878 [2024-11-27 09:56:45.059371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.878 [2024-11-27 09:56:45.059382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.878 [2024-11-27 09:56:45.059391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.878 [2024-11-27 09:56:45.059402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.878 [2024-11-27 09:56:45.059411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.878 [2024-11-27 09:56:45.059420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedc20 is same with the state(6) to be set
00:24:29.878 [2024-11-27 09:56:45.059773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.878 [2024-11-27 09:56:45.059792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202a8a0 with addr=10.0.0.2, port=4420
00:24:29.878 [2024-11-27 09:56:45.059801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202a8a0 is same with the state(6) to be set
00:24:29.878 [2024-11-27 09:56:45.059813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2012d70 (9): Bad file descriptor
00:24:29.878 [2024-11-27 09:56:45.059824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:24:29.878 [2024-11-27 09:56:45.059832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:24:29.878 [2024-11-27 09:56:45.059841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:24:29.878 [2024-11-27 09:56:45.059851] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:24:29.878 [2024-11-27 09:56:45.059936] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:24:29.878 [2024-11-27 09:56:45.061018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:24:29.878 [2024-11-27 09:56:45.061040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202a6c0 (9): Bad file descriptor
00:24:29.878 [2024-11-27 09:56:45.061053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202a8a0 (9): Bad file descriptor
00:24:29.878 [2024-11-27 09:56:45.061063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:24:29.878 [2024-11-27 09:56:45.061070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:24:29.878 [2024-11-27 09:56:45.061079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:24:29.878 [2024-11-27 09:56:45.061087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:24:29.878 [2024-11-27 09:56:45.061169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:24:29.878 [2024-11-27 09:56:45.061181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:24:29.878 [2024-11-27 09:56:45.061190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:24:29.878 [2024-11-27 09:56:45.061197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:24:29.878 [2024-11-27 09:56:45.061824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.878 [2024-11-27 09:56:45.061842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202a6c0 with addr=10.0.0.2, port=4420
00:24:29.878 [2024-11-27 09:56:45.061850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202a6c0 is same with the state(6) to be set
00:24:29.878 [2024-11-27 09:56:45.061900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202a6c0 (9): Bad file descriptor
00:24:29.878 [2024-11-27 09:56:45.061949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:24:29.878 [2024-11-27 09:56:45.061958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:24:29.878 [2024-11-27 09:56:45.061966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:24:29.878 [2024-11-27 09:56:45.061974] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:24:29.878 [2024-11-27 09:56:45.064497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.878 [2024-11-27 09:56:45.064512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.878 [2024-11-27 09:56:45.064525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.878 [2024-11-27 09:56:45.064532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.878 [2024-11-27 09:56:45.064542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.878 [2024-11-27 09:56:45.064550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.878 [2024-11-27 09:56:45.064560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.878 [2024-11-27 09:56:45.064567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.878 [2024-11-27 09:56:45.064577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.878 [2024-11-27 09:56:45.064585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.878 [2024-11-27 09:56:45.064594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.878 [2024-11-27 09:56:45.064602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.878 [2024-11-27 09:56:45.064612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.878 [2024-11-27 09:56:45.064619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.878 [2024-11-27 09:56:45.064629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.878 [2024-11-27 09:56:45.064637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.878 [2024-11-27 09:56:45.064648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.878 [2024-11-27 09:56:45.064656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.064669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.064677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 
09:56:45.064686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.064694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.064704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.064712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.064722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.064731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.064741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.064749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.064759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.064767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.064777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.064785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.064796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.064804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.064814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.064821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.064831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.064838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.064849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.064856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.064866] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.064873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.064883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.064893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.064903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.064910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.064919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.064927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.064937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.064945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.064954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.064962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.064972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.064979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.064988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.064997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.065006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.065014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.065023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.065032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.065042] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.065049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.065058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.065066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.065076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.065083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.065093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.065100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.065112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.065119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.065129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.065136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.065146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.065154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.065171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.065179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.065189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.065197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.065206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.065214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.065224] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.065232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.065242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.065250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.065260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.065268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.065278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.065285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.065295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.065302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.065313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.065320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.065330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.065341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.065351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.879 [2024-11-27 09:56:45.065360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.879 [2024-11-27 09:56:45.065370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.880 [2024-11-27 09:56:45.065378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.880 [2024-11-27 09:56:45.065388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.880 [2024-11-27 09:56:45.065396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.880 [2024-11-27 09:56:45.065406] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.880 [2024-11-27 09:56:45.065413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.880 [2024-11-27 09:56:45.065423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.880 [2024-11-27 09:56:45.065430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.880 [2024-11-27 09:56:45.065440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.880 [2024-11-27 09:56:45.065448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.880 [2024-11-27 09:56:45.065458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.880 [2024-11-27 09:56:45.065466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.880 [2024-11-27 09:56:45.065476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.880 [2024-11-27 09:56:45.065484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.880 [2024-11-27 09:56:45.065495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.880 [2024-11-27 09:56:45.065503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.880 [2024-11-27 09:56:45.065512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.880 [2024-11-27 09:56:45.065520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.880 [2024-11-27 09:56:45.065529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.880 [2024-11-27 09:56:45.065537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.880 [2024-11-27 09:56:45.065546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.880 [2024-11-27 09:56:45.065554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.880 [2024-11-27 09:56:45.065565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.880 [2024-11-27 09:56:45.065573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.880 [2024-11-27 09:56:45.065582] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.880 [2024-11-27 09:56:45.065590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.880 [2024-11-27 09:56:45.065599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.880 [2024-11-27 09:56:45.065608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.880 [2024-11-27 09:56:45.065617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.880 [2024-11-27 09:56:45.065625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.880 [2024-11-27 09:56:45.065635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.880 [2024-11-27 09:56:45.065643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.880 [2024-11-27 09:56:45.065651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d4790 is same with the state(6) to be set 00:24:29.880 [2024-11-27 09:56:45.066934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.880 [2024-11-27 09:56:45.066948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.880 [2024-11-27 09:56:45.066961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.880 [2024-11-27 09:56:45.066972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.880 [2024-11-27 09:56:45.066984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.880 [2024-11-27 09:56:45.066993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.880 [2024-11-27 09:56:45.067005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.880 [2024-11-27 09:56:45.067015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.880 [2024-11-27 09:56:45.067025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.880 [2024-11-27 09:56:45.067033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.880 [2024-11-27 09:56:45.067043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.880 [2024-11-27 09:56:45.067051] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.880 - 00:24:29.882 [2024-11-27 09:56:45.067061 - 09:56:45.068099] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 58 command/completion pairs collapsed: READ sqid:1 cid:6..63 nsid:1 lba:25344..32640 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.882 [2024-11-27 09:56:45.068108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1debe40 is same with the state(6) to be set
00:24:29.882 - 00:24:29.883 [2024-11-27 09:56:45.069401 - 09:56:45.070551] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 64 command/completion pairs collapsed: READ sqid:1 cid:0..63 nsid:1 lba:16384..24448 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.883 [2024-11-27 09:56:45.070559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70f0 is same with the state(6) to be set
00:24:29.883 - 00:24:29.885 [2024-11-27 09:56:45.071896 - 09:56:45.073044] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 64 command/completion pairs collapsed: READ sqid:1 cid:0..63 nsid:1 lba:16384..24448 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.885 [2024-11-27 09:56:45.073053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe9bf0 is same with the state(6) to be set
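
Every burst above is the same event seen from the host side: all READs still outstanding on qid:1 complete with ABORTED - SQ DELETION (00/08), the NVMe generic status "Command Aborted due to SQ Deletion", immediately followed by nvme_tcp resetting the receive state of the affected tqpair. That is consistent with whole queue pairs being torn down, as a disconnect/reset test intends, rather than with individual I/O errors. Below is a minimal sketch for condensing such bursts when reading a log offline; plain Python, not part of the SPDK tree, and it assumes one log entry per line (the wrapped console lines here would need rejoining first):

#!/usr/bin/env python3
# Illustrative log condenser for the nvme_qpair NOTICE bursts above.
# The regexes mirror the lines printed by nvme_io_qpair_print_command
# and spdk_nvme_print_completion; everything else is an assumption.
import re
import sys

CMD = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (\w+) "
                 r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")
CPL = re.compile(r"spdk_nvme_print_completion: \*NOTICE\*: ([A-Z -]+?) "
                 r"\((\w+)/(\w+)\)")

def summarize(stream):
    burst, status = [], None
    for line in stream:
        m = CMD.search(line)
        if m:
            # Record one command: (opcode, sqid, cid, lba).
            op, sqid, cid, _nsid, lba, _len = m.groups()
            burst.append((op, int(sqid), int(cid), int(lba)))
            continue
        m = CPL.search(line)
        if m:
            status = "{} ({}/{})".format(*m.groups())
            continue
        # In this log, a qpair state-change ERROR terminates each burst.
        if "nvme_tcp_qpair_set_recv_state" in line and burst:
            cids = [c for _, _, c, _ in burst]
            lbas = [l for _, _, _, l in burst]
            print("{} x {} sqid:{} cid:{}..{} lba:{}..{} -> {}".format(
                len(burst), burst[0][0], burst[0][1],
                min(cids), max(cids), min(lbas), max(lbas), status))
            burst, status = [], None

if __name__ == "__main__":
    summarize(sys.stdin)

Fed this section with the wrapped lines rejoined, it would print one line per burst, e.g. "64 x READ sqid:1 cid:0..63 lba:16384..24448 -> ABORTED - SQ DELETION (00/08)".
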
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.885 [2024-11-27 09:56:45.074416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.885 [2024-11-27 09:56:45.074427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.885 [2024-11-27 09:56:45.074435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.885 [2024-11-27 09:56:45.074445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.885 [2024-11-27 09:56:45.074452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.885 [2024-11-27 09:56:45.074463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.885 [2024-11-27 09:56:45.074470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.885 [2024-11-27 09:56:45.074481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.885 [2024-11-27 09:56:45.074489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.885 [2024-11-27 09:56:45.074499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.885 [2024-11-27 09:56:45.074506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.885 [2024-11-27 09:56:45.074516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.885 [2024-11-27 09:56:45.074524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.885 [2024-11-27 09:56:45.074534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.885 [2024-11-27 09:56:45.074542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.885 [2024-11-27 09:56:45.074552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.885 [2024-11-27 09:56:45.074560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.885 [2024-11-27 09:56:45.074570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.885 [2024-11-27 09:56:45.074578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.885 [2024-11-27 09:56:45.074588] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.885 [2024-11-27 09:56:45.074596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.885 [2024-11-27 09:56:45.074607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.885 [2024-11-27 09:56:45.074615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.885 [2024-11-27 09:56:45.074625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.885 [2024-11-27 09:56:45.074632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.885 [2024-11-27 09:56:45.074643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.885 [2024-11-27 09:56:45.074650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.885 [2024-11-27 09:56:45.074661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.885 [2024-11-27 09:56:45.074668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.885 [2024-11-27 09:56:45.074678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.885 [2024-11-27 09:56:45.074686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.885 [2024-11-27 09:56:45.074697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.885 [2024-11-27 09:56:45.074705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.885 [2024-11-27 09:56:45.074715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.885 [2024-11-27 09:56:45.074722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.885 [2024-11-27 09:56:45.074732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.074740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.074750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.074757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.074767] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.074775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.074785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.074792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.074803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.074811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.074821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.074830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.074839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.074847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.074857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.074864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.074874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.074882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.074892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.074900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.074909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.074917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.074927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.074935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.074944] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.074952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.074962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.074969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.074979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.074987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.074999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.075007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.075016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.075024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.075034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.075042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.075051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.075060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.075070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.075077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.075087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.075094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.075104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.075112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.075122] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.075129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.075139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.075146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.075156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.075168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.075178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.075186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.075195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.075202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.075212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.075220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.075230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.075238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.075247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.075255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.075264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.075272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.075287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.075294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.075304] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.075311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.075321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.075330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.075340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.075347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.075357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.075365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.075375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.075382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.075392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.075400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.075409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.075417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.886 [2024-11-27 09:56:45.075427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.886 [2024-11-27 09:56:45.075434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.887 [2024-11-27 09:56:45.075444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.887 [2024-11-27 09:56:45.075451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.887 [2024-11-27 09:56:45.075461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.887 [2024-11-27 09:56:45.075469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.887 [2024-11-27 09:56:45.075478] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.887 [2024-11-27 09:56:45.075486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.887 [2024-11-27 09:56:45.075494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feb170 is same with the state(6) to be set 00:24:29.887 [2024-11-27 09:56:45.076764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.887 [2024-11-27 09:56:45.076777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.887 [2024-11-27 09:56:45.076789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.887 [2024-11-27 09:56:45.076798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.887 [2024-11-27 09:56:45.076809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.887 [2024-11-27 09:56:45.076816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.887 [2024-11-27 09:56:45.076826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.887 [2024-11-27 09:56:45.076834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.887 [2024-11-27 09:56:45.076844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.887 [2024-11-27 09:56:45.076852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.887 [2024-11-27 09:56:45.076862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.887 [2024-11-27 09:56:45.076869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.887 [2024-11-27 09:56:45.076880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.887 [2024-11-27 09:56:45.076887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.887 [2024-11-27 09:56:45.076897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.887 [2024-11-27 09:56:45.076905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.887 [2024-11-27 09:56:45.076915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.887 [2024-11-27 09:56:45.076923] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.887 [2024-11-27 09:56:45.076933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.887 [2024-11-27 09:56:45.076940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.887 [2024-11-27 09:56:45.076950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.887 [2024-11-27 09:56:45.076958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.887 [2024-11-27 09:56:45.076968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.887 [2024-11-27 09:56:45.076976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.887 [2024-11-27 09:56:45.076987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.887 [2024-11-27 09:56:45.076997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.887 [2024-11-27 09:56:45.077007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.887 [2024-11-27 09:56:45.077015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.887 [2024-11-27 09:56:45.077025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.887 [2024-11-27 09:56:45.077032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.887 [2024-11-27 09:56:45.077042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.887 [2024-11-27 09:56:45.077049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.887 [2024-11-27 09:56:45.077059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.887 [2024-11-27 09:56:45.077067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.887 [2024-11-27 09:56:45.077076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.887 [2024-11-27 09:56:45.077084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.887 [2024-11-27 09:56:45.077094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.887 [2024-11-27 09:56:45.077102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.887 [2024-11-27 09:56:45.077111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.887 [2024-11-27 09:56:45.077118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.887 [2024-11-27 09:56:45.077128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.887 [2024-11-27 09:56:45.077135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.887 [2024-11-27 09:56:45.077146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.887 [2024-11-27 09:56:45.077153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.887 [2024-11-27 09:56:45.077170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.887 [2024-11-27 09:56:45.077178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.887 [2024-11-27 09:56:45.077188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.887 [2024-11-27 09:56:45.077196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.887 [2024-11-27 09:56:45.077205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.887 [2024-11-27 09:56:45.077213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.887 [2024-11-27 09:56:45.077226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.887 [2024-11-27 09:56:45.077234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.887 [2024-11-27 09:56:45.077243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.887 [2024-11-27 09:56:45.077251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.887 [2024-11-27 09:56:45.077261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.887 [2024-11-27 09:56:45.077269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.887 [2024-11-27 09:56:45.077279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.887 [2024-11-27 09:56:45.077286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.887 [2024-11-27 09:56:45.077297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.887 [2024-11-27 09:56:45.077305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.887 [2024-11-27 09:56:45.077315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.887 [2024-11-27 09:56:45.077323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.888 [2024-11-27 09:56:45.077333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.888 [2024-11-27 09:56:45.077341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.888 [2024-11-27 09:56:45.077351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.888 [2024-11-27 09:56:45.077359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.888 [2024-11-27 09:56:45.077369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.888 [2024-11-27 09:56:45.077377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.888 [2024-11-27 09:56:45.077386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.888 [2024-11-27 09:56:45.077393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.888 [2024-11-27 09:56:45.077403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.888 [2024-11-27 09:56:45.077411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.888 [2024-11-27 09:56:45.077420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.888 [2024-11-27 09:56:45.077427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.888 [2024-11-27 09:56:45.077438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.888 [2024-11-27 09:56:45.077448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.888 [2024-11-27 09:56:45.077458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.888 [2024-11-27 09:56:45.077466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:29.888 [2024-11-27 09:56:45.077475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.888 [2024-11-27 09:56:45.077483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.888 [2024-11-27 09:56:45.077493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.888 [2024-11-27 09:56:45.077501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.888 [2024-11-27 09:56:45.077510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.888 [2024-11-27 09:56:45.077518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.888 [2024-11-27 09:56:45.077528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.888 [2024-11-27 09:56:45.077535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.888 [2024-11-27 09:56:45.077545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.888 [2024-11-27 09:56:45.077553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.888 [2024-11-27 09:56:45.077563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.888 [2024-11-27 09:56:45.077570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.888 [2024-11-27 09:56:45.077580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.888 [2024-11-27 09:56:45.077588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.888 [2024-11-27 09:56:45.077598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.888 [2024-11-27 09:56:45.077605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.888 [2024-11-27 09:56:45.077615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.888 [2024-11-27 09:56:45.077623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.888 [2024-11-27 09:56:45.077633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.888 [2024-11-27 09:56:45.077640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:29.888 [2024-11-27 09:56:45.077650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.888 [2024-11-27 09:56:45.077657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.888 [2024-11-27 09:56:45.077669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.888 [2024-11-27 09:56:45.077677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.888 [2024-11-27 09:56:45.077686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.888 [2024-11-27 09:56:45.077694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.888 [2024-11-27 09:56:45.077704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.888 [2024-11-27 09:56:45.077711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.888 [2024-11-27 09:56:45.077721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.888 [2024-11-27 09:56:45.077728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.888 [2024-11-27 09:56:45.077738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.888 [2024-11-27 09:56:45.077745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.888 [2024-11-27 09:56:45.077755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.888 [2024-11-27 09:56:45.077763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.888 [2024-11-27 09:56:45.077773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.888 [2024-11-27 09:56:45.077780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.888 [2024-11-27 09:56:45.077789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.888 [2024-11-27 09:56:45.077797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.888 [2024-11-27 09:56:45.077807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.888 [2024-11-27 09:56:45.077814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.888 [2024-11-27 
09:56:45.077824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.888 [2024-11-27 09:56:45.077832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.888 [2024-11-27 09:56:45.077841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.888 [2024-11-27 09:56:45.077849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.888 [2024-11-27 09:56:45.077858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.888 [2024-11-27 09:56:45.077865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.888 [2024-11-27 09:56:45.077875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.888 [2024-11-27 09:56:45.077884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.888 [2024-11-27 09:56:45.077894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.888 [2024-11-27 09:56:45.077902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.888 [2024-11-27 09:56:45.077910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27660 is same with the state(6) to be set 00:24:29.888 [2024-11-27 09:56:45.080222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:29.888 [2024-11-27 09:56:45.080245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:24:29.888 [2024-11-27 09:56:45.080256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:24:29.888 [2024-11-27 09:56:45.080324] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:24:29.888 [2024-11-27 09:56:45.080339] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:24:29.888 [2024-11-27 09:56:45.080352] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:24:29.888 [2024-11-27 09:56:45.080365] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 
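Every completion above carries the status pair (00/08): status code type 0x0 (generic) with status code 0x08, which the NVMe spec names "Command Aborted due to SQ Deletion". In other words, tearing down the TCP qpairs during the controller reset completes every in-flight READ with an abort status instead of silently dropping it. A small bash helper (hypothetical, not part of the test suite) that decodes the "(sct/sc)" hex pair printed in these lines:

    # decode_nvme_status SCT SC - interpret the "(sct/sc)" hex pair from the
    # spdk_nvme_print_completion output above, e.g. "decode_nvme_status 00 08".
    # Only the generic (SCT 0x0) codes seen in this log are mapped.
    decode_nvme_status() {
      local sct=$((16#$1)) sc=$((16#$2))
      if ((sct != 0)); then
        echo "SCT 0x$1: non-generic status code type"
        return
      fi
      case $sc in
        0) echo "SUCCESSFUL COMPLETION" ;;
        8) echo "ABORTED - SQ DELETION (command aborted because its submission queue was deleted)" ;;
        *) printf 'generic status code 0x%02x\n' "$sc" ;;
      esac
    }
    decode_nvme_status 00 08   # prints the ABORTED - SQ DELETION description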
00:24:29.888 [2024-11-27 09:56:45.080433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:24:29.888 [2024-11-27 09:56:45.080445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
task offset: 29568 on job bdev=Nvme2n1 fails
00:24:29.888
00:24:29.888 Latency(us)
00:24:29.888 [2024-11-27T08:56:45.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
(all ten jobs ran with Core Mask 0x1, workload: verify, depth: 64, IO size: 65536, Verification LBA range: start 0x0 length 0x400, and each ended in about the runtime shown, with error)
Nvme1n1  :   0.96   199.35   12.46   66.45   0.00   238063.15    25122.13   235929.60
Nvme2n1  :   0.93   206.76   12.92   68.92   0.00   224602.72     5570.56   265639.25
Nvme3n1  :   0.97   198.84   12.43   66.28   0.00   229039.36    16711.68   223696.21
Nvme4n1  :   0.97   132.23    8.26   66.11   0.00   299819.24    19879.25   256901.12
Nvme5n1  :   0.95   205.54   12.85   67.46   0.00   212680.35    18786.99   219327.15
Nvme6n1  :   0.97   131.89    8.24   65.94   0.00   287838.44    16602.45   249910.61
Nvme7n1  :   0.97   197.34   12.33   65.78   0.00   211527.89    23156.05   237677.23
Nvme8n1  :   0.95   201.51   12.59   67.17   0.00   201643.01     2730.67   269134.51
Nvme9n1  :   0.96   200.59   12.54    5.22   0.00   256706.51    31675.73   270882.13
Nvme10n1 :   0.98   131.23    8.20   65.62   0.00   263598.65    23811.41   265639.25
===================================================================================================================
00:24:29.889 [2024-11-27T08:56:45.355Z] Total : 1805.28 112.83 604.96 0.00 238730.49 2730.67 270882.13
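The columns are internally consistent with the job header: at a 65536-byte IO size, MiB/s is just IOPS x 65536 / 2^20, i.e. IOPS / 16 (Nvme1n1: 199.35 / 16 is about 12.46), and Fail/s counts the aborted IOs per second. A quick awk check of two rows (values copied from the table above; illustrative only):

    # Recompute MiB/s from IOPS for a 65536-byte IO size and compare with the table.
    awk 'BEGIN {
      io = 65536 / (1024 * 1024)   # MiB per IO
      printf "Nvme1n1: %.2f MiB/s (table says 12.46)\n", 199.35  * io
      printf "Total:   %.2f MiB/s (table says 112.83)\n", 1805.28 * io
    }'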
00:24:29.889 [2024-11-27 09:56:45.107548] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:24:29.889 [2024-11-27 09:56:45.107578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:24:29.889 [2024-11-27 09:56:45.107591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:24:29.889 [2024-11-27 09:56:45.107869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.889 [2024-11-27 09:56:45.107887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6cb0 with addr=10.0.0.2, port=4420
00:24:29.889 [2024-11-27 09:56:45.107897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6cb0 is same with the state(6) to be set
[... the same connect() failed (errno = 111) / sock connection error / recv state triple repeats for tqpair=0x1bdc170 and tqpair=0x1be4fc0, all with addr=10.0.0.2, port=4420 ...]
00:24:29.889 [2024-11-27 09:56:45.110073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:24:29.889 [2024-11-27 09:56:45.110087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
[... the same triple repeats for tqpair=0x1afe610, 0x2007b80, 0x204f630 and 0x1bdd930, all with addr=10.0.0.2, port=4420 ...]
00:24:29.889 [2024-11-27 09:56:45.111188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6cb0 (9): Bad file descriptor
[... the same flush error repeats for tqpair=0x1bdc170 and 0x1be4fc0 ...]
00:24:29.889 [2024-11-27 09:56:45.111237] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:24:29.889 [2024-11-27 09:56:45.111256] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:24:29.889 [2024-11-27 09:56:45.111268] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:24:29.889 [2024-11-27 09:56:45.111279] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:24:29.889 [2024-11-27 09:56:45.111344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
[... the same connect() failed (errno = 111) triple repeats for tqpair=0x2012d70 and 0x202a8a0, followed by the same flush error (Bad file descriptor) for tqpair=0x1afe610, 0x2007b80, 0x204f630 and 0x1bdd930 ...]
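errno 111 is ECONNREFUSED: by this point the target side has been shut down, so every reconnect the bdev_nvme layer attempts is refused at the TCP level, and the already-closed qpair sockets then surface as "Bad file descriptor" when the driver tries to flush them. The same condition can be demonstrated from the shell with a plain TCP probe of the listener address (illustrative only; 10.0.0.2:4420 is the target address taken from the log):

    # Probe the NVMe-oF/TCP listener the initiator is trying to reach.
    # /dev/tcp is a bash pseudo-device; a refused or timed-out connect here
    # corresponds to the "connect() failed, errno = 111" lines above.
    if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
      echo "port 4420 is accepting connections"
    else
      echo "connect refused/timed out - what the driver logs as errno = 111"
    fi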
00:24:29.889 [2024-11-27 09:56:45.112073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:24:29.889 [2024-11-27 09:56:45.112080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:24:29.889 [2024-11-27 09:56:45.112089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:24:29.889 [2024-11-27 09:56:45.112097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
[... the same four-step sequence (Ctrlr is in error state / controller reinitialization failed / in failed state / Resetting controller failed) repeats for cnode3 and cnode4 ...]
00:24:29.889 [2024-11-27 09:56:45.112543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.889 [2024-11-27 09:56:45.112555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202a6c0 with addr=10.0.0.2, port=4420
00:24:29.889 [2024-11-27 09:56:45.112563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202a6c0 is same with the state(6) to be set
00:24:29.889 [2024-11-27 09:56:45.112571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2012d70 (9): Bad file descriptor
00:24:29.889 [2024-11-27 09:56:45.112581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202a8a0 (9): Bad file descriptor
[... the same four-step sequence repeats for cnode6, cnode7, cnode10 and cnode2 ...]
00:24:29.890 [2024-11-27 09:56:45.112723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202a6c0 (9): Bad file descriptor
[... the same four-step sequence repeats for cnode5, cnode8 and cnode9 ...]
00:24:29.890 [2024-11-27 09:56:45.112812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:24:29.890 [2024-11-27 09:56:45.112819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:24:29.890 [2024-11-27 09:56:45.112827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:24:29.890 [2024-11-27 09:56:45.112833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:24:29.890 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:24:30.835 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3949481 00:24:30.835 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:24:30.835 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3949481 00:24:30.835 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:24:30.835 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:30.835 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:24:30.835 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:30.835 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3949481 00:24:30.835 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:24:30.835 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:30.835 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:24:30.835 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:24:30.835 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:24:30.835 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:30.835 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:24:30.835 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:31.096 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:31.096 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:31.096 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:31.097 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:31.097 09:56:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:24:31.097 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:31.097 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:24:31.097 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:31.097 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:31.097 rmmod nvme_tcp 00:24:31.097 rmmod nvme_fabrics 00:24:31.097 rmmod nvme_keyring 00:24:31.097 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:31.097 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:24:31.097 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:24:31.097 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3949152 ']' 00:24:31.097 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3949152 00:24:31.097 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3949152 ']' 00:24:31.097 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3949152 00:24:31.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3949152) - No such process 00:24:31.097 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3949152 is not found' 00:24:31.097 Process with pid 3949152 is not found 00:24:31.097 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:31.097 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:31.097 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:31.097 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:24:31.097 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:24:31.097 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:31.097 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:24:31.097 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:31.097 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:31.097 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.097 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.097 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:33.012 09:56:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:33.012 00:24:33.012 real 0m7.697s 00:24:33.012 user 0m18.534s 00:24:33.012 sys 0m1.282s 00:24:33.012 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:33.012 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:33.012 ************************************ 00:24:33.012 END TEST nvmf_shutdown_tc3 00:24:33.012 ************************************ 00:24:33.274 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:24:33.274 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:24:33.274 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:24:33.274 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:33.274 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:33.274 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:33.274 ************************************ 00:24:33.274 START TEST nvmf_shutdown_tc4 00:24:33.274 ************************************ 00:24:33.274 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:24:33.274 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:24:33.274 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:33.274 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:33.274 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:33.274 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:33.274 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:33.274 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:33.275 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:33.275 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:33.275 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:33.275 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:33.275 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:33.276 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:33.276 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:33.276 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:33.276 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:33.537 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:33.537 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:33.537 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:33.537 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:33.537 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:33.537 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:33.537 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:33.537 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:33.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:33.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.695 ms 00:24:33.537 00:24:33.537 --- 10.0.0.2 ping statistics --- 00:24:33.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.537 rtt min/avg/max/mdev = 0.695/0.695/0.695/0.000 ms 00:24:33.537 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:33.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:33.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:24:33.537 00:24:33.537 --- 10.0.0.1 ping statistics --- 00:24:33.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.537 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:24:33.537 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:33.537 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:24:33.537 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:33.537 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:33.537 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:33.537 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:33.537 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:33.537 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:33.537 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:33.537 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:33.537 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:33.537 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:33.537 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:33.537 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3950678 00:24:33.538 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3950678 00:24:33.538 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:33.538 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3950678 ']' 00:24:33.538 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.538 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:33.538 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:33.538 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:33.538 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:33.800 [2024-11-27 09:56:49.006529] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:24:33.800 [2024-11-27 09:56:49.006593] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:33.800 [2024-11-27 09:56:49.103250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:33.800 [2024-11-27 09:56:49.136994] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:33.800 [2024-11-27 09:56:49.137024] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:33.800 [2024-11-27 09:56:49.137031] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:33.800 [2024-11-27 09:56:49.137036] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:33.800 [2024-11-27 09:56:49.137040] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:33.800 [2024-11-27 09:56:49.138389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:33.800 [2024-11-27 09:56:49.138596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:33.800 [2024-11-27 09:56:49.138745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:33.800 [2024-11-27 09:56:49.138746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:34.371 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:34.371 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:24:34.371 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:34.371 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:34.371 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:34.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:34.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:34.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:34.633 [2024-11-27 09:56:49.857507] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:34.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:34.633 09:56:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:34.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:34.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:34.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:34.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:34.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:34.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:34.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:34.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:34.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:34.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:34.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:34.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:34.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:34.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:34.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:34.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:34.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:34.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:34.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:34.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:34.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:34.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:34.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:34.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:34.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.633 09:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:34.633 Malloc1 
00:24:34.633 [2024-11-27 09:56:49.969810] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:34.633 Malloc2 00:24:34.633 Malloc3 00:24:34.633 Malloc4 00:24:34.895 Malloc5 00:24:34.895 Malloc6 00:24:34.895 Malloc7 00:24:34.895 Malloc8 00:24:34.895 Malloc9 00:24:34.895 Malloc10 00:24:34.895 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.895 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:34.895 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:34.895 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:35.156 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3951057 00:24:35.156 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:24:35.156 09:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:24:35.156 [2024-11-27 09:56:50.450749] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:40.456 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:40.456 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3950678 00:24:40.456 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3950678 ']' 00:24:40.456 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3950678 00:24:40.456 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:24:40.456 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:40.456 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3950678 00:24:40.456 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:40.456 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:40.456 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3950678' 00:24:40.456 killing process with pid 3950678 00:24:40.456 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3950678 00:24:40.456 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3950678 00:24:40.456 [2024-11-27 09:56:55.446487] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2f670 is same with the state(6) to be set 00:24:40.456 [2024-11-27 09:56:55.446533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2f670 is same with the state(6) to be set 00:24:40.456 [2024-11-27 09:56:55.446540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2f670 is same with the state(6) to be set 00:24:40.456 [2024-11-27 09:56:55.446545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2f670 is same with the state(6) to be set 00:24:40.456 Write completed with error (sct=0, sc=8) 00:24:40.456 Write completed with error (sct=0, sc=8) 00:24:40.456 starting I/O failed: -6 00:24:40.456 Write completed with error (sct=0, sc=8) 00:24:40.456 Write completed with error (sct=0, sc=8) 00:24:40.456 Write completed with error (sct=0, sc=8) 00:24:40.456 Write completed with error (sct=0, sc=8) 00:24:40.456 starting I/O failed: -6 00:24:40.456 Write completed with error (sct=0, sc=8) 00:24:40.456 Write completed with error (sct=0, sc=8) 00:24:40.456 Write completed with error (sct=0, sc=8) 00:24:40.456 Write completed with error (sct=0, sc=8) 00:24:40.456 starting I/O failed: -6 00:24:40.456 Write completed with error (sct=0, sc=8) 00:24:40.456 Write completed with error (sct=0, sc=8) 00:24:40.456 Write completed with error (sct=0, sc=8) 00:24:40.456 Write completed with error (sct=0, sc=8) 00:24:40.456 starting I/O failed: -6 00:24:40.456 Write completed with error (sct=0, sc=8) 00:24:40.456 Write completed with error (sct=0, sc=8) 00:24:40.456 Write completed with error (sct=0, sc=8) 00:24:40.456 Write completed with error (sct=0, sc=8) 00:24:40.456 starting I/O failed: -6 00:24:40.456 Write completed with error (sct=0, sc=8) 00:24:40.456 Write completed with error (sct=0, sc=8) 00:24:40.456 Write completed with error (sct=0, sc=8) 00:24:40.456 Write completed with error (sct=0, sc=8) 00:24:40.456 starting I/O failed: -6 00:24:40.456 Write completed with error (sct=0, sc=8) 00:24:40.456 Write completed with error (sct=0, sc=8) 00:24:40.456 Write completed with error (sct=0, sc=8) 00:24:40.456 Write completed with error (sct=0, sc=8) 00:24:40.456 starting I/O failed: -6 00:24:40.456 Write completed with error (sct=0, sc=8) 00:24:40.456 Write completed with error (sct=0, sc=8) 00:24:40.456 Write completed with error (sct=0, sc=8) 00:24:40.456 Write completed with error (sct=0, sc=8) 00:24:40.456 starting I/O failed: -6 00:24:40.456 Write completed with error (sct=0, sc=8) 00:24:40.456 Write completed with error (sct=0, sc=8) 00:24:40.456 Write completed with error (sct=0, sc=8) 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 starting I/O failed: -6 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 starting I/O failed: -6 00:24:40.457 [2024-11-27 09:56:55.450058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 starting I/O failed: -6 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 starting I/O failed: -6 00:24:40.457 Write completed 
with error (sct=0, sc=8) 00:24:40.457 starting I/O failed: -6 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 starting I/O failed: -6 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 starting I/O failed: -6 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 starting I/O failed: -6 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 starting I/O failed: -6 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 starting I/O failed: -6 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 starting I/O failed: -6 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 starting I/O failed: -6 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 starting I/O failed: -6 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 starting I/O failed: -6 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 starting I/O failed: -6 00:24:40.457 Write completed with error (sct=0, sc=8) [2024-11-27 09:56:55.450784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfff90 is same with the state(6) to be set 00:24:40.457 Write completed with error (sct=0, sc=8) [2024-11-27 09:56:55.450810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfff90 is same with the state(6) to be set [2024-11-27 09:56:55.450816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfff90 is same with the state(6) to be set 00:24:40.457 Write completed with error (sct=0, sc=8) [2024-11-27 09:56:55.450821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfff90 is same with the state(6) to be set [2024-11-27 09:56:55.450827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfff90 is same with the state(6) to be set 00:24:40.457 starting I/O failed: -6 [2024-11-27 09:56:55.450838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfff90 is same with the state(6) to be set 00:24:40.457 Write completed with error (sct=0, sc=8) [2024-11-27 09:56:55.450843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfff90 is same with the state(6) to be set starting I/O failed: -6 00:24:40.457 [2024-11-27
09:56:55.450848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfff90 is same with the state(6) to be set 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 starting I/O failed: -6 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 starting I/O failed: -6 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 starting I/O failed: -6 00:24:40.457 [2024-11-27 09:56:55.450982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:40.457 [2024-11-27 09:56:55.451041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc00480 is same with the state(6) to be set 00:24:40.457 [2024-11-27 09:56:55.451064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc00480 is same with the state(6) to be set 00:24:40.457 [2024-11-27 09:56:55.451070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc00480 is same with the state(6) to be set 00:24:40.457 [2024-11-27 09:56:55.451075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc00480 is same with the state(6) to be set 00:24:40.457 [2024-11-27 09:56:55.451081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc00480 is same with the state(6) to be set 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 starting I/O failed: -6 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 starting I/O failed: -6 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 starting I/O failed: -6 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 starting I/O failed: -6 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 starting I/O failed: -6 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 starting I/O failed: -6 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 starting I/O failed: -6 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 starting I/O failed: -6 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 starting I/O failed: -6 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 starting I/O failed: -6 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 starting I/O failed: -6 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 starting I/O failed: -6 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 [2024-11-27 09:56:55.451362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc00970 is same with the state(6) to be set 00:24:40.457 starting I/O failed: -6 00:24:40.457 [2024-11-27 09:56:55.451379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc00970 is same with the state(6) to be set 00:24:40.457 Write completed with error (sct=0, sc=8) 00:24:40.457 [2024-11-27 09:56:55.451384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc00970 is 
same with the state(6) to be set [2024-11-27 09:56:55.451390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc00970 is same with the state(6) to be set [2024-11-27 09:56:55.451395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc00970 is same with the state(6) to be set Write completed with error (sct=0, sc=8) 00:24:40.458 starting I/O failed: -6 00:24:40.458 Write completed with error (sct=0, sc=8) 00:24:40.458 starting I/O failed: -6 00:24:40.458 Write completed with error (sct=0, sc=8) 00:24:40.458 starting I/O failed: -6 00:24:40.458 Write completed with error (sct=0, sc=8) 00:24:40.458 Write completed with error (sct=0, sc=8) 00:24:40.458 starting I/O failed: -6 00:24:40.458 Write completed with error (sct=0, sc=8) 00:24:40.458 starting I/O failed: -6 00:24:40.458 Write completed with error (sct=0, sc=8) 00:24:40.458 starting I/O failed: -6 00:24:40.458 Write completed with error (sct=0, sc=8) 00:24:40.458 Write completed with error (sct=0, sc=8) 00:24:40.458 starting I/O failed: -6 00:24:40.458 Write completed with error (sct=0, sc=8) 00:24:40.458 starting I/O failed: -6 00:24:40.458 Write completed with error (sct=0, sc=8) 00:24:40.458 starting I/O failed: -6 00:24:40.458 Write completed with error (sct=0, sc=8) 00:24:40.458 Write completed with error (sct=0, sc=8) 00:24:40.458 starting I/O failed: -6 00:24:40.458 Write completed with error (sct=0, sc=8) [2024-11-27 09:56:55.451600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffac0 is same with the state(6) to be set 00:24:40.458 starting I/O failed: -6 00:24:40.458 Write completed with error (sct=0, sc=8) [2024-11-27 09:56:55.451622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffac0 is same with the state(6) to be set starting I/O failed: -6 00:24:40.458 [2024-11-27 09:56:55.451628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffac0 is same with the state(6) to be set Write completed with error (sct=0, sc=8) [2024-11-27 09:56:55.451633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffac0 is same with the state(6) to be set [2024-11-27 09:56:55.451639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffac0 is same with the state(6) to be set Write completed with error (sct=0, sc=8) starting I/O failed: -6 00:24:40.458 Write completed with error (sct=0, sc=8) starting I/O failed: -6 00:24:40.458 Write completed with error (sct=0, sc=8) starting I/O failed: -6 00:24:40.458 Write completed with error (sct=0, sc=8) Write completed with error (sct=0, sc=8) starting I/O failed: -6 00:24:40.458 Write completed with error (sct=0, sc=8) starting I/O failed: -6 00:24:40.458 Write completed with error (sct=0, sc=8) starting I/O failed: -6 00:24:40.458 Write completed with error (sct=0, sc=8) Write completed with error (sct=0, sc=8) starting I/O failed: -6
00:24:40.458 Write completed with error (sct=0, sc=8)
00:24:40.458 starting I/O failed: -6
00:24:40.458 [2024-11-27 09:56:55.451879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:40.458 Write completed with error (sct=0, sc=8)
00:24:40.458 starting I/O failed: -6
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated for the remaining failed writes ...]
00:24:40.459 [2024-11-27 09:56:55.453255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:40.459 NVMe io qpair process completion error
00:24:40.459 [2024-11-27 09:56:55.453969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff5d0 is same with the state(6) to be set
[... same recv-state error for tqpair=0xbff5d0 repeated ...]
00:24:40.459 [2024-11-27 09:56:55.455896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02690 is same with the state(6) to be set
[... same recv-state error for tqpair=0xc02690 repeated ...]
00:24:40.459 [2024-11-27 09:56:55.456164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b60 is same with the state(6) to be set
[... same recv-state error for tqpair=0xc02b60 repeated ...]
00:24:40.459 [2024-11-27 09:56:55.456239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03030 is same with the state(6) to be set
[... same recv-state error for tqpair=0xc03030 repeated ...]
00:24:40.459 Write completed with error (sct=0, sc=8)
00:24:40.459 [2024-11-27 09:56:55.456581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc021c0 is same with the state(6) to be set
[... same recv-state error for tqpair=0xc021c0 repeated, interleaved with further write failures ...]
00:24:40.459 Write completed with error (sct=0, sc=8)
00:24:40.459 starting I/O failed: -6
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated ...]
00:24:40.459 [2024-11-27 09:56:55.457060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... further write failures ...]
00:24:40.460 [2024-11-27 09:56:55.458082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... further write failures ...]
00:24:40.460 [2024-11-27 09:56:55.459010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... further write failures ...]
00:24:40.461 [2024-11-27 09:56:55.460452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:40.461 NVMe io qpair process completion error
[... further write failures ...]
00:24:40.461 [2024-11-27 09:56:55.461517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... further write failures ...]
00:24:40.462 [2024-11-27 09:56:55.462335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... further write failures ...]
00:24:40.462 [2024-11-27 09:56:55.463259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... further write failures ...]
00:24:40.463 [2024-11-27 09:56:55.465549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:40.463 NVMe io qpair process completion error
00:24:40.463 Write completed with error (sct=0, sc=8)
00:24:40.463 starting I/O failed: -6
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated ...]
00:24:40.464 [2024-11-27 09:56:55.466801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... further write failures ...]
00:24:40.464 [2024-11-27 09:56:55.467599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... further write failures ...]
00:24:40.465 [2024-11-27 09:56:55.468521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... further write failures ...]
00:24:40.465 [2024-11-27 09:56:55.470258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:40.465 NVMe io qpair process completion error
[... further write failures ...]
00:24:40.466 [2024-11-27 09:56:55.471328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:40.466 Write completed with error (sct=0, sc=8)
00:24:40.466 starting I/O failed: -6
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated ...]
00:24:40.466 [2024-11-27 09:56:55.472252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... further write failures ...]
00:24:40.467 [2024-11-27 09:56:55.473152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:40.467 Write completed with error (sct=0, sc=8)
00:24:40.467 starting I/O failed: -6
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated ...]
00:24:40.467 starting I/O failed: -6 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 starting I/O failed: -6 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 starting I/O failed: -6 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 starting I/O failed: -6 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 starting I/O failed: -6 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 starting I/O failed: -6 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 starting I/O failed: -6 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 starting I/O failed: -6 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 starting I/O failed: -6 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 starting I/O failed: -6 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 starting I/O failed: -6 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 starting I/O failed: -6 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 starting I/O failed: -6 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 starting I/O failed: -6 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 starting I/O failed: -6 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 starting I/O failed: -6 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 starting I/O failed: -6 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 starting I/O failed: -6 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 starting I/O failed: -6 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 starting I/O failed: -6 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 starting I/O failed: -6 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 starting I/O failed: -6 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 starting I/O failed: -6 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 starting I/O failed: -6 00:24:40.467 [2024-11-27 09:56:55.474773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:40.467 NVMe io qpair process completion error 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 starting I/O failed: -6 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 starting I/O failed: -6 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 starting I/O failed: -6 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.467 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, 
sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 [2024-11-27 09:56:55.475958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 
starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 [2024-11-27 09:56:55.476822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error 
(sct=0, sc=8) 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.468 starting I/O failed: -6 00:24:40.468 Write completed with error (sct=0, sc=8) 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 [2024-11-27 09:56:55.477761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O 
failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.469 starting I/O failed: -6 00:24:40.469 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O 
failed: -6 00:24:40.470 [2024-11-27 09:56:55.480834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:40.470 NVMe io qpair process completion error 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 [2024-11-27 09:56:55.481929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 
starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 [2024-11-27 09:56:55.482741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write 
completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.470 starting I/O failed: -6 00:24:40.470 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 [2024-11-27 09:56:55.483668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 
00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 
00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.471 starting I/O failed: -6 00:24:40.471 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 [2024-11-27 09:56:55.485119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:40.472 NVMe io qpair process completion error 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 
00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 [2024-11-27 09:56:55.486305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 
starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 [2024-11-27 09:56:55.487144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 starting I/O failed: -6 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.472 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error 
(sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 [2024-11-27 09:56:55.488099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:40.473 starting I/O failed: -6 00:24:40.473 starting I/O failed: -6 00:24:40.473 starting I/O failed: -6 00:24:40.473 starting I/O failed: -6 00:24:40.473 starting I/O failed: -6 00:24:40.473 starting I/O failed: -6 00:24:40.473 starting I/O failed: -6 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 00:24:40.473 starting I/O failed: -6 00:24:40.473 Write completed with error (sct=0, sc=8) 
00:24:40.473 starting I/O failed: -6
00:24:40.473 Write completed with error (sct=0, sc=8)
[... the two entries above alternate for every outstanding write on the failing qpairs between 00:24:40.473 and 00:24:40.478; the identical repeated completion entries are condensed here, leaving the unique qpair error reports below ...]
00:24:40.474 [2024-11-27 09:56:55.490776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:40.474 NVMe io qpair process completion error
00:24:40.474 [2024-11-27 09:56:55.492065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:40.474 [2024-11-27 09:56:55.492866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:40.475 [2024-11-27 09:56:55.493802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:40.476 [2024-11-27 09:56:55.495681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:40.476 NVMe io qpair process completion error
00:24:40.477 [2024-11-27 09:56:55.498129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:40.478 [2024-11-27 09:56:55.500758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:40.478 NVMe io qpair process completion error
00:24:40.478 Initializing NVMe Controllers
00:24:40.478 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:24:40.478 Controller IO queue size 128, less than required.
00:24:40.478 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:40.478 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:24:40.478 Controller IO queue size 128, less than required.
00:24:40.478 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:40.478 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:24:40.478 Controller IO queue size 128, less than required.
00:24:40.478 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:40.478 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:24:40.478 Controller IO queue size 128, less than required.
00:24:40.478 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:40.478 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:24:40.478 Controller IO queue size 128, less than required.
00:24:40.478 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:40.478 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:24:40.478 Controller IO queue size 128, less than required.
00:24:40.478 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:40.478 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:24:40.478 Controller IO queue size 128, less than required.
00:24:40.478 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:40.478 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:24:40.478 Controller IO queue size 128, less than required.
00:24:40.478 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:40.478 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:40.478 Controller IO queue size 128, less than required.
00:24:40.478 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:40.478 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:24:40.478 Controller IO queue size 128, less than required.
00:24:40.478 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:40.478 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:24:40.478 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:24:40.478 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:24:40.478 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:24:40.478 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:24:40.478 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:24:40.478 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:24:40.478 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:24:40.478 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:40.478 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:24:40.478 Initialization complete. Launching workers.
00:24:40.478 ========================================================
00:24:40.478                                                                                        Latency(us)
00:24:40.478 Device Information                                                        :       IOPS      MiB/s    Average        min        max
00:24:40.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:    1916.02      82.33   66820.35     797.09  124434.38
00:24:40.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:    1908.96      82.03   67088.95     656.55  124936.81
00:24:40.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:    1902.11      81.73   67364.46     819.53  124535.24
00:24:40.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:   1869.31      80.32   67906.99     912.72  125075.64
00:24:40.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:    1874.91      80.56   68314.67     481.15  129282.45
00:24:40.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:    1880.93      80.82   67501.15     589.61  124082.45
00:24:40.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:    1900.65      81.67   66816.50     800.17  124661.13
00:24:40.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:    1901.69      81.71   66810.79     848.66  126787.15
00:24:40.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1883.63      80.94   67472.20     818.38  119628.21
00:24:40.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:    1857.68      79.82   68438.67     620.47  127046.22
00:24:40.478 ========================================================
00:24:40.478 Total                                                                     :   18895.90     811.93   67448.52     481.15  129282.45
00:24:40.478
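Every controller in the summary above reports "Controller IO queue size 128, less than required": the benchmark asked for a deeper queue than the target's IO queues allow, so the surplus requests wait inside the host NVMe driver, which likely contributes to the roughly 67 ms average latencies in the table. A minimal sketch of rerunning one subsystem with the log's own advice applied; the flags follow spdk_nvme_perf's usage, but the exact invocation, queue depth, and IO size here are assumptions, not the command line this test actually ran:

    # Sketch (assumption): cap the queue depth at the reported IO queue size (128)
    # so requests are not queued inside the NVMe driver.
    # -q: queue depth  -o: IO size in bytes  -w: IO pattern  -t: run time in seconds
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 128 -o 4096 -w randwrite -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode5'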
00:24:40.478 [2024-11-27 09:56:55.505426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0dbc0 is same with the state(6) to be set
00:24:40.478 [2024-11-27 09:56:55.505471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0e410 is same with the state(6) to be set
00:24:40.478 [2024-11-27 09:56:55.505504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0d890 is same with the state(6) to be set
00:24:40.478 [2024-11-27 09:56:55.505534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f900 is same with the state(6) to be set
00:24:40.478 [2024-11-27 09:56:55.505564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ea70 is same with the state(6) to be set
00:24:40.478 [2024-11-27 09:56:55.505594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0fae0 is same with the state(6) to be set
00:24:40.478 [2024-11-27 09:56:55.505622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0e740 is same with the state(6) to be set
00:24:40.478 [2024-11-27 09:56:55.505651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0def0 is same with the state(6) to be set
00:24:40.478 [2024-11-27 09:56:55.505680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f720 is same with the state(6) to be set
00:24:40.478 [2024-11-27 09:56:55.505708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0d560 is same with the state(6) to be set
00:24:40.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:24:40.478 09:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:24:41.423 09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3951057
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3951057
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3951057
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
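The trace above is autotest_common.sh's NOT helper confirming that the perf job failed as intended: it waits on the killed spdk_nvme_perf process (pid 3951057), records its non-zero exit status (es=1), checks that the status was not a signal death ((es > 128) is false), and finally inverts the status with (( !es == 0 )) so an expected failure reports success. A minimal sketch of that expect-failure pattern, leaving out the valid_exec_arg check and the finer signal handling the real helper performs:

    # Sketch (assumption): expect-failure wrapper in the spirit of the traced NOT logic.
    NOT() {
        local es=0
        "$@" || es=$?              # run the wrapped command, capturing any non-zero status
        if (( es > 128 )); then    # statuses above 128 mean the command died from a signal
            es=$(( es & ~128 ))    # strip the signal flag (the real helper special-cases this)
        fi
        (( !es == 0 ))             # (( )) succeeds only when es != 0, i.e. the failure happened
    }

    NOT wait 3951057               # returns 0 here precisely because the perf process exited non-zero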
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:41.423 rmmod nvme_tcp
00:24:41.423 rmmod nvme_fabrics
00:24:41.423 rmmod nvme_keyring
00:24:41.423 09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3950678 ']'
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3950678
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3950678 ']'
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3950678
00:24:41.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3950678) - No such process
00:24:41.423 09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3950678 is not found'
00:24:41.423 Process with pid 3950678 is not found
00:24:41.423 09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
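The iptr step that closes the teardown is a tidy idiom: rather than flushing the firewall, it rewrites the ruleset without the harness's own entries by chaining iptables-save, a grep -v on the SPDK_NVMF tag, and iptables-restore, so rules added outside the test survive. The function name and tag come straight from the trace; a standalone sketch of the same pattern:

    # Drop only the firewall rules tagged SPDK_NVMF; every other rule is re-applied untouched.
    iptr() {
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }

The same three-command pipeline works for any marker string, which is presumably why the harness tags the rules it installs in the first place.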
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
09:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:43.971 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:43.971
00:24:43.971 real 0m10.306s
00:24:43.971 user 0m27.849s
00:24:43.971 sys 0m4.145s
00:24:43.971 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:43.971 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:24:43.971 ************************************
00:24:43.971 END TEST nvmf_shutdown_tc4
00:24:43.971 ************************************
00:24:43.971 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:24:43.971
00:24:43.971 real 0m43.701s
00:24:43.971 user 1m45.949s
00:24:43.971 sys 0m14.178s
00:24:43.971 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:43.971 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:24:43.971 ************************************
00:24:43.971 END TEST nvmf_shutdown
00:24:43.971 ************************************
00:24:43.971 09:56:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:24:43.971 09:56:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:24:43.971 09:56:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:43.971 09:56:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:24:43.971 ************************************
00:24:43.971 START TEST nvmf_nsid
00:24:43.971 ************************************
00:24:43.971 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:24:43.971 * Looking for test storage...
00:24:43.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:43.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.971 --rc genhtml_branch_coverage=1 00:24:43.971 --rc genhtml_function_coverage=1 00:24:43.971 --rc genhtml_legend=1 00:24:43.971 --rc geninfo_all_blocks=1 00:24:43.971 --rc geninfo_unexecuted_blocks=1 00:24:43.971 00:24:43.971 ' 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:43.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.971 --rc genhtml_branch_coverage=1 00:24:43.971 --rc genhtml_function_coverage=1 00:24:43.971 --rc genhtml_legend=1 00:24:43.971 --rc geninfo_all_blocks=1 00:24:43.971 --rc geninfo_unexecuted_blocks=1 00:24:43.971 00:24:43.971 ' 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:43.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.971 --rc genhtml_branch_coverage=1 00:24:43.971 --rc genhtml_function_coverage=1 00:24:43.971 --rc genhtml_legend=1 00:24:43.971 --rc geninfo_all_blocks=1 00:24:43.971 --rc geninfo_unexecuted_blocks=1 00:24:43.971 00:24:43.971 ' 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:43.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.971 --rc genhtml_branch_coverage=1 00:24:43.971 --rc genhtml_function_coverage=1 00:24:43.971 --rc genhtml_legend=1 00:24:43.971 --rc geninfo_all_blocks=1 00:24:43.971 --rc geninfo_unexecuted_blocks=1 00:24:43.971 00:24:43.971 ' 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:43.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:24:43.971 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:43.972 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:43.972 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:43.972 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:43.972 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:43.972 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.972 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:43.972 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.972 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:43.972 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:43.972 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:24:43.972 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:52.364 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:52.364 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
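
The device-ID tables built above drive the NIC discovery that follows: gather_supported_nvmf_pci_devs walks sysfs and matches each device's vendor/device pair against the e810/x722/mlx arrays before echoing the "Found 0000:4b:00.0 (0x8086 - 0x159b)" lines. A minimal standalone sketch of that sysfs walk (not the SPDK helper itself; only the two Intel E810 IDs seen in this log are matched):

  intel=0x8086
  for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor") device=$(<"$pci/device")
    [[ $vendor == "$intel" && $device =~ ^0x(1592|159b)$ ]] || continue
    # NICs bound to a net driver expose their interface name under net/
    for net in "$pci"/net/*; do
      [[ -e $net ]] && echo "Found ${pci##*/} ($vendor - $device): ${net##*/}"
    done
  done
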
00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:52.364 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:52.365 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:52.365 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:52.365 09:57:06 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:52.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:52.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.521 ms 00:24:52.365 00:24:52.365 --- 10.0.0.2 ping statistics --- 00:24:52.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.365 rtt min/avg/max/mdev = 0.521/0.521/0.521/0.000 ms 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:52.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
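
Condensed from the nvmf_tcp_init trace above, the whole target/initiator split is a handful of iproute2 calls: the first port (cvl_0_0) moves into a fresh namespace as the target side, the second (cvl_0_1) stays in the root namespace as the initiator, and one iptables rule opens the NVMe/TCP port before both directions are pinged. The same sequence as a runnable sketch, with the interface and namespace names taken from this log:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                          # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # target ns -> initiator
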
00:24:52.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:24:52.365 00:24:52.365 --- 10.0.0.1 ping statistics --- 00:24:52.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.365 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3956526 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3956526 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3956526 ']' 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:52.365 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:52.365 [2024-11-27 09:57:06.849405] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
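
waitforlisten, invoked right after nvmfappstart here, blocks until the freshly forked nvmf_tgt answers on /var/tmp/spdk.sock. A minimal equivalent of that polling loop, assuming rpc.py from the SPDK tree and the pid the trace assigned to nvmfpid:

  pid=3956526 rpc_addr=/var/tmp/spdk.sock
  for ((i = 0; i < 100; i++)); do
    kill -0 "$pid" 2> /dev/null || exit 1     # bail out if the target died
    if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
      break                                   # socket is up and answering RPCs
    fi
    sleep 0.5
  done
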
00:24:52.365 [2024-11-27 09:57:06.849472] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:52.365 [2024-11-27 09:57:06.947567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.365 [2024-11-27 09:57:06.999089] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:52.365 [2024-11-27 09:57:06.999141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:52.365 [2024-11-27 09:57:06.999150] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:52.365 [2024-11-27 09:57:06.999166] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:52.365 [2024-11-27 09:57:06.999173] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:52.365 [2024-11-27 09:57:06.999895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.365 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:52.365 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:52.365 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:52.365 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:52.365 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:52.365 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:52.365 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:52.365 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3956823 00:24:52.365 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:24:52.365 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:24:52.365 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:24:52.365 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:24:52.365 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:52.365 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:52.365 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.365 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.365 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:52.365 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.365 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:52.365 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:52.366 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:24:52.366 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:24:52.366 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:24:52.366 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=020e3e45-c121-4f15-ac19-71758115318d 00:24:52.366 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:24:52.366 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=4ae26194-2d05-40e4-a353-7a78b5719068 00:24:52.366 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:24:52.366 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=986ac402-307c-432e-8e35-4c05586b2732 00:24:52.366 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:24:52.366 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.366 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:52.366 null0 00:24:52.366 null1 00:24:52.366 [2024-11-27 09:57:07.762558] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:24:52.366 [2024-11-27 09:57:07.762624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3956823 ] 00:24:52.366 null2 00:24:52.366 [2024-11-27 09:57:07.770000] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:52.366 [2024-11-27 09:57:07.794309] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:52.626 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.626 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3956823 /var/tmp/tgt2.sock 00:24:52.626 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3956823 ']' 00:24:52.626 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:24:52.626 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:52.626 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:24:52.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
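
The rpc_cmd batch behind target/nsid.sh@63 provisions the second target through /var/tmp/tgt2.sock: null bdevs, a subsystem for the NQN above, and namespaces pinned to the uuidgen values. Roughly, with the bdev sizes assumed (the null0/null1/null2 names, the cnode2 NQN, the 10.0.0.1:4421 listener and the three UUIDs are the ones in this log):

  rpc="scripts/rpc.py -s /var/tmp/tgt2.sock"
  $rpc nvmf_create_transport -t tcp
  $rpc bdev_null_create null0 100 4096        # 100 MiB, 4 KiB blocks (assumed)
  $rpc bdev_null_create null1 100 4096
  $rpc bdev_null_create null2 100 4096
  $rpc nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
  $rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null0 -u 020e3e45-c121-4f15-ac19-71758115318d
  $rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null1 -u 4ae26194-2d05-40e4-a353-7a78b5719068
  $rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null2 -u 986ac402-307c-432e-8e35-4c05586b2732
  $rpc nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 -t tcp -a 10.0.0.1 -s 4421

The NGUID checks further down rest on the uuid2nguid identity visible at nvmf/common.sh@787: an NGUID is the UUID with its dashes stripped, so tr -d - <<< 020e3e45-c121-4f15-ac19-71758115318d yields 020e3e45c1214f15ac1971758115318d, which nvme id-ns -o json then reports uppercased.
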
00:24:52.626 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:52.626 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:52.626 [2024-11-27 09:57:07.851543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.626 [2024-11-27 09:57:07.904541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:52.887 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:52.887 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:52.887 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:24:53.148 [2024-11-27 09:57:08.459142] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:53.148 [2024-11-27 09:57:08.475344] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:24:53.148 nvme0n1 nvme0n2 00:24:53.148 nvme1n1 00:24:53.148 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:24:53.148 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:24:53.148 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:54.533 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:24:54.533 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:24:54.533 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:24:54.533 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:24:54.533 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:24:54.533 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:24:54.533 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:24:54.533 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:54.533 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:54.533 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:54.533 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:24:54.533 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:24:54.533 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:24:55.921 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:55.921 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:55.921 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:55.921 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:24:55.921 09:57:10 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:55.921 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 020e3e45-c121-4f15-ac19-71758115318d 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=020e3e45c1214f15ac1971758115318d 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 020E3E45C1214F15AC1971758115318D 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 020E3E45C1214F15AC1971758115318D == \0\2\0\E\3\E\4\5\C\1\2\1\4\F\1\5\A\C\1\9\7\1\7\5\8\1\1\5\3\1\8\D ]] 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 4ae26194-2d05-40e4-a353-7a78b5719068 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=4ae261942d0540e4a3537a78b5719068 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 4AE261942D0540E4A3537A78B5719068 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 4AE261942D0540E4A3537A78B5719068 == \4\A\E\2\6\1\9\4\2\D\0\5\4\0\E\4\A\3\5\3\7\A\7\8\B\5\7\1\9\0\6\8 ]] 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:24:55.921 09:57:11 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 986ac402-307c-432e-8e35-4c05586b2732 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=986ac402307c432e8e354c05586b2732 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 986AC402307C432E8E354C05586B2732 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 986AC402307C432E8E354C05586B2732 == \9\8\6\A\C\4\0\2\3\0\7\C\4\3\2\E\8\E\3\5\4\C\0\5\5\8\6\B\2\7\3\2 ]] 00:24:55.921 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:24:56.182 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:24:56.182 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:24:56.182 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3956823 00:24:56.182 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3956823 ']' 00:24:56.182 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3956823 00:24:56.182 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:56.182 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:56.182 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3956823 00:24:56.182 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:56.182 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:56.182 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3956823' 00:24:56.182 killing process with pid 3956823 00:24:56.182 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3956823 00:24:56.182 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3956823 00:24:56.442 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:24:56.442 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:56.442 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:24:56.442 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:56.442 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:24:56.442 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:56.442 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:56.442 rmmod nvme_tcp 00:24:56.442 rmmod nvme_fabrics 00:24:56.442 rmmod nvme_keyring 00:24:56.442 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:56.442 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:24:56.442 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:24:56.442 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3956526 ']' 00:24:56.442 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3956526 00:24:56.442 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3956526 ']' 00:24:56.442 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3956526 00:24:56.442 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:56.442 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:56.442 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3956526 00:24:56.442 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:56.442 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:56.442 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3956526' 00:24:56.442 killing process with pid 3956526 00:24:56.442 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3956526 00:24:56.442 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3956526 00:24:56.702 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:56.702 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:56.702 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:56.702 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:24:56.702 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:24:56.702 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:56.702 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:24:56.702 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:56.702 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:56.702 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.702 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:56.702 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.612 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:58.612 00:24:58.612 real 0m15.011s 00:24:58.612 user 
0m11.438s 00:24:58.612 sys 0m6.901s 00:24:58.612 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:58.612 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:58.612 ************************************ 00:24:58.612 END TEST nvmf_nsid 00:24:58.612 ************************************ 00:24:58.612 09:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:24:58.612 00:24:58.612 real 13m5.750s 00:24:58.612 user 27m19.574s 00:24:58.612 sys 3m56.773s 00:24:58.612 09:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:58.612 09:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:58.612 ************************************ 00:24:58.612 END TEST nvmf_target_extra 00:24:58.612 ************************************ 00:24:58.872 09:57:14 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:58.872 09:57:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:58.872 09:57:14 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:58.872 09:57:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:58.872 ************************************ 00:24:58.872 START TEST nvmf_host 00:24:58.872 ************************************ 00:24:58.872 09:57:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:58.872 * Looking for test storage... 00:24:58.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:58.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.873 --rc genhtml_branch_coverage=1 00:24:58.873 --rc genhtml_function_coverage=1 00:24:58.873 --rc genhtml_legend=1 00:24:58.873 --rc geninfo_all_blocks=1 00:24:58.873 --rc geninfo_unexecuted_blocks=1 00:24:58.873 00:24:58.873 ' 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:58.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.873 --rc genhtml_branch_coverage=1 00:24:58.873 --rc genhtml_function_coverage=1 00:24:58.873 --rc genhtml_legend=1 00:24:58.873 --rc geninfo_all_blocks=1 00:24:58.873 --rc geninfo_unexecuted_blocks=1 00:24:58.873 00:24:58.873 ' 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:58.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.873 --rc genhtml_branch_coverage=1 00:24:58.873 --rc genhtml_function_coverage=1 00:24:58.873 --rc genhtml_legend=1 00:24:58.873 --rc geninfo_all_blocks=1 00:24:58.873 --rc geninfo_unexecuted_blocks=1 00:24:58.873 00:24:58.873 ' 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:58.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.873 --rc genhtml_branch_coverage=1 00:24:58.873 --rc genhtml_function_coverage=1 00:24:58.873 --rc genhtml_legend=1 00:24:58.873 --rc geninfo_all_blocks=1 00:24:58.873 --rc geninfo_unexecuted_blocks=1 00:24:58.873 00:24:58.873 ' 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
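
The scripts/common.sh trace above (cmp_versions invoked as lt 1.15 2) decides whether the installed lcov predates 2.x by comparing dotted versions field by field, with missing fields treated as zero. A self-contained sketch of the same comparison:

  version_lt() {                  # usage: version_lt 1.15 2  -> status 0 if $1 < $2
    local IFS=.- i v1 v2
    read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
      ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
      ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
    done
    return 1                      # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "pre-2.0 lcov: use the --rc lcov_*_coverage=1 spellings"
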
00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:58.873 09:57:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:59.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.134 ************************************ 00:24:59.134 START TEST nvmf_multicontroller 00:24:59.134 ************************************ 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:59.134 * Looking for test storage... 
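
build_nvmf_app_args, whose expansion is traced at nvmf/common.sh@25-@31 just above, accumulates the target's command line in a bash array rather than a flat string, so every option survives word splitting when the app is finally launched; the terse '# : 0' at common.sh@51 is how a ${NVMF_APP_SHM_ID:=0} default would expand, though the trace only shows the result. A trimmed illustration of the array pattern, with the binary path taken from this log:

  NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
  : "${NVMF_APP_SHM_ID:=0}"                  # matches the ': 0' seen at common.sh@51
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
  # when a target namespace exists, the command is prefixed with
  # 'ip netns exec <ns>', as common.sh@293 does earlier in this log
  [[ -n ${NVMF_TARGET_NS_CMD[*]:-} ]] && NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
  "${NVMF_APP[@]}" &                         # each array element expands as one word
  nvmfpid=$!
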
00:24:59.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:59.134 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:24:59.395 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:24:59.395 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:59.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.396 --rc genhtml_branch_coverage=1 00:24:59.396 --rc genhtml_function_coverage=1 00:24:59.396 --rc genhtml_legend=1 00:24:59.396 --rc geninfo_all_blocks=1 00:24:59.396 --rc geninfo_unexecuted_blocks=1 00:24:59.396 00:24:59.396 ' 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:59.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.396 --rc genhtml_branch_coverage=1 00:24:59.396 --rc genhtml_function_coverage=1 00:24:59.396 --rc genhtml_legend=1 00:24:59.396 --rc geninfo_all_blocks=1 00:24:59.396 --rc geninfo_unexecuted_blocks=1 00:24:59.396 00:24:59.396 ' 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:59.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.396 --rc genhtml_branch_coverage=1 00:24:59.396 --rc genhtml_function_coverage=1 00:24:59.396 --rc genhtml_legend=1 00:24:59.396 --rc geninfo_all_blocks=1 00:24:59.396 --rc geninfo_unexecuted_blocks=1 00:24:59.396 00:24:59.396 ' 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:59.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.396 --rc genhtml_branch_coverage=1 00:24:59.396 --rc genhtml_function_coverage=1 00:24:59.396 --rc genhtml_legend=1 00:24:59.396 --rc geninfo_all_blocks=1 00:24:59.396 --rc geninfo_unexecuted_blocks=1 00:24:59.396 00:24:59.396 ' 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:59.396 09:57:14 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:59.396 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:59.396 09:57:14 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:59.396 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:59.397 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:59.397 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:59.397 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:59.397 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:59.397 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:59.397 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:59.397 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:59.397 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.397 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:59.397 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.397 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:59.397 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:59.397 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:24:59.397 09:57:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:25:07.540 
09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:07.540 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:07.540 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:07.540 09:57:21 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:07.540 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:07.540 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:07.540 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
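For reference, the device-discovery loop traced above (nvmf/common.sh@366-429) resolves each supported PCI function to its kernel net device by globbing sysfs. A minimal bash sketch of that lookup, reusing the two E810 addresses found in this run (other hosts will report different functions and device names):

    # Sketch of the sysfs lookup performed above; the PCI addresses are
    # the two E810 ports (0x8086:0x159b) detected in this run.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] || continue          # skip if the glob matched nothing
            echo "Found net devices under $pci: ${dev##*/}"
        done
    done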
00:25:07.541 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:07.541 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:07.541 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:07.541 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:07.541 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:07.541 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:07.541 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:07.541 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:07.541 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:07.541 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:07.541 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:07.541 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:07.541 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:07.541 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:07.541 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:07.541 09:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:07.541 09:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:07.541 09:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:07.541 09:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:07.541 09:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:07.541 09:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:07.541 09:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:07.541 09:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:07.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:07.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:25:07.541 00:25:07.541 --- 10.0.0.2 ping statistics --- 00:25:07.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.541 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:25:07.541 09:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:07.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:07.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:25:07.541 00:25:07.541 --- 10.0.0.1 ping statistics --- 00:25:07.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.541 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:25:07.541 09:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:07.541 09:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:25:07.541 09:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:07.541 09:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:07.541 09:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:07.541 09:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:07.541 09:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:07.541 09:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:07.541 09:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:07.541 09:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:25:07.541 09:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:07.541 09:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:07.541 09:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:07.541 09:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3962423 00:25:07.541 09:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3962423 00:25:07.541 09:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:07.541 09:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3962423 ']' 00:25:07.541 09:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.541 09:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:07.541 09:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:07.541 09:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:07.541 09:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:07.541 [2024-11-27 09:57:22.265702] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
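The nvmf_tcp_init sequence traced above isolates the target port in its own network namespace so that initiator and target traffic actually crosses the wire between the two E810 ports. Condensed into a runnable sketch (interface names, addresses, and flags copied from this run):

    # Target NIC moves into a private namespace; each side gets a /24;
    # port 4420 is opened with an SPDK_NVMF-tagged rule (used at cleanup).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
    # the target app is then launched inside the namespace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE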
00:25:07.541 [2024-11-27 09:57:22.265772] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:07.541 [2024-11-27 09:57:22.368184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:07.541 [2024-11-27 09:57:22.420176] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:07.541 [2024-11-27 09:57:22.420230] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:07.541 [2024-11-27 09:57:22.420239] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:07.541 [2024-11-27 09:57:22.420247] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:07.541 [2024-11-27 09:57:22.420253] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:07.541 [2024-11-27 09:57:22.422077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:07.541 [2024-11-27 09:57:22.422232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:07.541 [2024-11-27 09:57:22.422232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:07.803 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:07.803 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:25:07.803 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:07.803 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:07.803 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:07.803 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:07.803 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:07.803 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.803 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:07.803 [2024-11-27 09:57:23.145489] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:07.803 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.803 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:07.803 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.803 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:07.803 Malloc0 00:25:07.803 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.803 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:07.803 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.803 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:25:07.803 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.803 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:07.803 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.803 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:07.803 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.803 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:07.803 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.803 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:07.803 [2024-11-27 09:57:23.216765] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:07.803 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.803 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:07.803 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.803 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:07.803 [2024-11-27 09:57:23.228615] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:07.804 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.804 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:07.804 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.804 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:07.804 Malloc1 00:25:07.804 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.804 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:07.804 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.804 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:08.065 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.065 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:25:08.065 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.065 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:08.065 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.065 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:08.065 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.065 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:08.065 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.065 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:25:08.065 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.065 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:08.065 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.065 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3962482 00:25:08.065 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:08.065 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:25:08.065 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3962482 /var/tmp/bdevperf.sock 00:25:08.065 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3962482 ']' 00:25:08.065 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:08.065 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:08.065 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:08.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
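The subsystems bdevperf is about to attach to were populated just above (host/multicontroller.sh@27-41): one TCP transport, then two subsystems, each backed by a 64 MiB / 512 B-block Malloc bdev and listening on both ports 4420 and 4421. As a sketch, the same setup expressed through SPDK's scripts/rpc.py (the trace itself goes through the rpc_cmd wrapper; arguments are copied from this run):

    # Assumes cwd is the spdk repo and the target listens on the default
    # /var/tmp/spdk.sock; cnode2/Malloc1 repeat the same pattern.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421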
00:25:08.065 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:08.065 09:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:09.007 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:09.007 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:25:09.007 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:25:09.007 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.007 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:09.007 NVMe0n1 00:25:09.007 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.007 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:09.007 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:25:09.007 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.007 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:09.007 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.007 1 00:25:09.007 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:09.007 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:25:09.007 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:09.007 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:09.007 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:09.007 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:09.007 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:09.008 request: 00:25:09.008 { 00:25:09.008 "name": "NVMe0", 00:25:09.008 "trtype": "tcp", 00:25:09.008 "traddr": "10.0.0.2", 00:25:09.008 "adrfam": "ipv4", 00:25:09.008 "trsvcid": "4420", 00:25:09.008 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:25:09.008 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:25:09.008 "hostaddr": "10.0.0.1", 00:25:09.008 "prchk_reftag": false, 00:25:09.008 "prchk_guard": false, 00:25:09.008 "hdgst": false, 00:25:09.008 "ddgst": false, 00:25:09.008 "allow_unrecognized_csi": false, 00:25:09.008 "method": "bdev_nvme_attach_controller", 00:25:09.008 "req_id": 1 00:25:09.008 } 00:25:09.008 Got JSON-RPC error response 00:25:09.008 response: 00:25:09.008 { 00:25:09.008 "code": -114, 00:25:09.008 "message": "A controller named NVMe0 already exists with the specified network path" 00:25:09.008 } 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:09.008 request: 00:25:09.008 { 00:25:09.008 "name": "NVMe0", 00:25:09.008 "trtype": "tcp", 00:25:09.008 "traddr": "10.0.0.2", 00:25:09.008 "adrfam": "ipv4", 00:25:09.008 "trsvcid": "4420", 00:25:09.008 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:09.008 "hostaddr": "10.0.0.1", 00:25:09.008 "prchk_reftag": false, 00:25:09.008 "prchk_guard": false, 00:25:09.008 "hdgst": false, 00:25:09.008 "ddgst": false, 00:25:09.008 "allow_unrecognized_csi": false, 00:25:09.008 "method": "bdev_nvme_attach_controller", 00:25:09.008 "req_id": 1 00:25:09.008 } 00:25:09.008 Got JSON-RPC error response 00:25:09.008 response: 00:25:09.008 { 00:25:09.008 "code": -114, 00:25:09.008 "message": "A controller named NVMe0 already exists with the specified network path" 00:25:09.008 } 00:25:09.008 09:57:24 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:09.008 request: 00:25:09.008 { 00:25:09.008 "name": "NVMe0", 00:25:09.008 "trtype": "tcp", 00:25:09.008 "traddr": "10.0.0.2", 00:25:09.008 "adrfam": "ipv4", 00:25:09.008 "trsvcid": "4420", 00:25:09.008 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:09.008 "hostaddr": "10.0.0.1", 00:25:09.008 "prchk_reftag": false, 00:25:09.008 "prchk_guard": false, 00:25:09.008 "hdgst": false, 00:25:09.008 "ddgst": false, 00:25:09.008 "multipath": "disable", 00:25:09.008 "allow_unrecognized_csi": false, 00:25:09.008 "method": "bdev_nvme_attach_controller", 00:25:09.008 "req_id": 1 00:25:09.008 } 00:25:09.008 Got JSON-RPC error response 00:25:09.008 response: 00:25:09.008 { 00:25:09.008 "code": -114, 00:25:09.008 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:25:09.008 } 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:09.008 09:57:24 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:09.008 request: 00:25:09.008 { 00:25:09.008 "name": "NVMe0", 00:25:09.008 "trtype": "tcp", 00:25:09.008 "traddr": "10.0.0.2", 00:25:09.008 "adrfam": "ipv4", 00:25:09.008 "trsvcid": "4420", 00:25:09.008 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:09.008 "hostaddr": "10.0.0.1", 00:25:09.008 "prchk_reftag": false, 00:25:09.008 "prchk_guard": false, 00:25:09.008 "hdgst": false, 00:25:09.008 "ddgst": false, 00:25:09.008 "multipath": "failover", 00:25:09.008 "allow_unrecognized_csi": false, 00:25:09.008 "method": "bdev_nvme_attach_controller", 00:25:09.008 "req_id": 1 00:25:09.008 } 00:25:09.008 Got JSON-RPC error response 00:25:09.008 response: 00:25:09.008 { 00:25:09.008 "code": -114, 00:25:09.008 "message": "A controller named NVMe0 already exists with the specified network path" 00:25:09.008 } 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:09.008 NVMe0n1 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
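Each NOT-wrapped attach above fails with JSON-RPC error -114 for the same underlying reason: a controller named NVMe0 already exists on 10.0.0.2:4420, so reusing the name with a different hostnqn, a different subnqn, or '-x disable'/'-x failover' on the identical path is rejected. What does succeed (host/multicontroller.sh@79) is adding a genuinely new path, port 4421, to the existing controller. A sketch of one failing and one succeeding call against the bdevperf RPC socket (flags copied from this run):

    RPC='./scripts/rpc.py -s /var/tmp/bdevperf.sock'
    # rejected, -114: NVMe0 already exists with the specified network path
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
    # accepted: same controller name, new path (second listener on 4421)
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # count controllers by name, as the grep -c checks at @54/@90 do
    $RPC bdev_nvme_get_controllers | grep -c NVMe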
00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.008 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:09.009 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.009 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:25:09.009 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.009 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:09.269 00:25:09.269 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.269 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:09.269 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:25:09.269 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.269 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:09.269 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.269 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:25:09.269 09:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:10.654 { 00:25:10.654 "results": [ 00:25:10.654 { 00:25:10.654 "job": "NVMe0n1", 00:25:10.654 "core_mask": "0x1", 00:25:10.654 "workload": "write", 00:25:10.654 "status": "finished", 00:25:10.654 "queue_depth": 128, 00:25:10.654 "io_size": 4096, 00:25:10.654 "runtime": 1.005557, 00:25:10.654 "iops": 23894.219820457718, 00:25:10.654 "mibps": 93.33679617366296, 00:25:10.654 "io_failed": 0, 00:25:10.654 "io_timeout": 0, 00:25:10.654 "avg_latency_us": 5345.163508830344, 00:25:10.654 "min_latency_us": 2362.0266666666666, 00:25:10.654 "max_latency_us": 13325.653333333334 00:25:10.654 } 00:25:10.654 ], 00:25:10.654 "core_count": 1 00:25:10.654 } 00:25:10.654 09:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:25:10.654 09:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.654 09:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:10.654 09:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.654 09:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:25:10.654 09:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3962482 00:25:10.654 09:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 3962482 ']' 00:25:10.654 09:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3962482 00:25:10.654 09:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:25:10.654 09:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:10.654 09:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3962482 00:25:10.654 09:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:10.654 09:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:10.654 09:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3962482' 00:25:10.654 killing process with pid 3962482 00:25:10.654 09:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3962482 00:25:10.654 09:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3962482 00:25:10.654 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:10.654 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.654 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:10.654 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.654 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:10.654 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.654 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:10.654 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.654 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:25:10.654 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:10.654 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:25:10.654 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:25:10.654 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:25:10.654 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:25:10.654 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:10.654 [2024-11-27 09:57:23.358869] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
00:25:10.654 [2024-11-27 09:57:23.358947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3962482 ] 00:25:10.654 [2024-11-27 09:57:23.453733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.654 [2024-11-27 09:57:23.507789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.655 [2024-11-27 09:57:24.655366] bdev.c:4696:bdev_name_add: *ERROR*: Bdev name b7b7b4b9-9186-44ee-899c-54762cd48375 already exists 00:25:10.655 [2024-11-27 09:57:24.655413] bdev.c:7832:bdev_register: *ERROR*: Unable to add uuid:b7b7b4b9-9186-44ee-899c-54762cd48375 alias for bdev NVMe1n1 00:25:10.655 [2024-11-27 09:57:24.655423] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:25:10.655 Running I/O for 1 seconds... 00:25:10.655 23836.00 IOPS, 93.11 MiB/s 00:25:10.655 Latency(us) 00:25:10.655 [2024-11-27T08:57:26.121Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:10.655 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:25:10.655 NVMe0n1 : 1.01 23894.22 93.34 0.00 0.00 5345.16 2362.03 13325.65 00:25:10.655 [2024-11-27T08:57:26.121Z] =================================================================================================================== 00:25:10.655 [2024-11-27T08:57:26.121Z] Total : 23894.22 93.34 0.00 0.00 5345.16 2362.03 13325.65 00:25:10.655 Received shutdown signal, test time was about 1.000000 seconds 00:25:10.655 00:25:10.655 Latency(us) 00:25:10.655 [2024-11-27T08:57:26.121Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:10.655 [2024-11-27T08:57:26.121Z] =================================================================================================================== 00:25:10.655 [2024-11-27T08:57:26.121Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:10.655 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:10.655 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:10.655 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:25:10.655 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:25:10.655 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:10.655 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:25:10.655 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:10.655 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:25:10.655 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:10.655 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:10.655 rmmod nvme_tcp 00:25:10.655 rmmod nvme_fabrics 00:25:10.655 rmmod nvme_keyring 00:25:10.655 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:10.655 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:25:10.655 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:25:10.655 
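The nvmftestfini teardown traced here unwinds the setup in reverse: the nvme-tcp/fabrics/keyring modules were just unloaded above, the target process is killed next, and the firewall and namespace plumbing is removed. The iptables cleanup relies on the comment tag attached at setup time; a sketch follows (the namespace-deletion step is an assumption about what _remove_spdk_ns does in this configuration, not confirmed by the trace):

    # Every rule the test added carries an 'SPDK_NVMF:...' comment, so
    # restoring a save file with those lines filtered out drops exactly
    # the test's rules and nothing else.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip -4 addr flush cvl_0_1              # drop the initiator-side address
    ip netns delete cvl_0_0_ns_spdk       # assumed _remove_spdk_ns behavior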
09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3962423 ']' 00:25:10.655 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3962423 00:25:10.655 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3962423 ']' 00:25:10.655 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3962423 00:25:10.963 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:25:10.964 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:10.964 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3962423 00:25:10.964 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:10.964 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:10.964 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3962423' 00:25:10.964 killing process with pid 3962423 00:25:10.964 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3962423 00:25:10.964 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3962423 00:25:10.964 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:10.964 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:10.964 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:10.964 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:25:10.964 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:25:10.964 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:10.964 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:25:10.964 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:10.964 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:10.964 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.964 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:10.964 09:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:13.509 00:25:13.509 real 0m14.003s 00:25:13.509 user 0m16.856s 00:25:13.509 sys 0m6.622s 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:13.509 ************************************ 00:25:13.509 END TEST nvmf_multicontroller 00:25:13.509 ************************************ 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.509 ************************************ 00:25:13.509 START TEST nvmf_aer 00:25:13.509 ************************************ 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:13.509 * Looking for test storage... 00:25:13.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:13.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.509 --rc genhtml_branch_coverage=1 00:25:13.509 --rc genhtml_function_coverage=1 00:25:13.509 --rc genhtml_legend=1 00:25:13.509 --rc geninfo_all_blocks=1 00:25:13.509 --rc geninfo_unexecuted_blocks=1 00:25:13.509 00:25:13.509 ' 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:13.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.509 --rc genhtml_branch_coverage=1 00:25:13.509 --rc genhtml_function_coverage=1 00:25:13.509 --rc genhtml_legend=1 00:25:13.509 --rc geninfo_all_blocks=1 00:25:13.509 --rc geninfo_unexecuted_blocks=1 00:25:13.509 00:25:13.509 ' 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:13.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.509 --rc genhtml_branch_coverage=1 00:25:13.509 --rc genhtml_function_coverage=1 00:25:13.509 --rc genhtml_legend=1 00:25:13.509 --rc geninfo_all_blocks=1 00:25:13.509 --rc geninfo_unexecuted_blocks=1 00:25:13.509 00:25:13.509 ' 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:13.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.509 --rc genhtml_branch_coverage=1 00:25:13.509 --rc genhtml_function_coverage=1 00:25:13.509 --rc genhtml_legend=1 00:25:13.509 --rc geninfo_all_blocks=1 00:25:13.509 --rc geninfo_unexecuted_blocks=1 00:25:13.509 00:25:13.509 ' 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:13.509 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.510 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.510 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.510 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:25:13.510 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.510 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:25:13.510 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:13.510 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:13.510 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:13.510 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:13.510 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:13.510 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:13.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:13.510 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:13.510 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:13.510 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:13.510 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:25:13.510 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:13.510 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:13.510 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:13.510 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:13.510 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:13.510 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.510 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:13.510 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.510 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:13.510 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:25:13.510 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:25:13.510 09:57:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:21.653 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:21.653 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:21.653 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:21.653 09:57:35 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:21.653 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:21.653 09:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:21.653 09:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:21.653 09:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:21.654 09:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:21.654 09:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:21.654 09:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:21.654 09:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:21.654 09:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:21.654 
09:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:21.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:21.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:25:21.654 00:25:21.654 --- 10.0.0.2 ping statistics --- 00:25:21.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.654 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:25:21.654 09:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:21.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:21.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:25:21.654 00:25:21.654 --- 10.0.0.1 ping statistics --- 00:25:21.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.654 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:25:21.654 09:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:21.654 09:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:25:21.654 09:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:21.654 09:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:21.654 09:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:21.654 09:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:21.654 09:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:21.654 09:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:21.654 09:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:21.654 09:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:25:21.654 09:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:21.654 09:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:21.654 09:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:21.654 09:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3967295 00:25:21.654 09:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3967295 00:25:21.654 09:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:21.654 09:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3967295 ']' 00:25:21.654 09:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.654 09:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:21.654 09:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:21.654 09:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:21.654 09:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:21.654 [2024-11-27 09:57:36.286649] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
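[editor's note] The trace above (nvmftestinit through nvmfappstart) is the standard topology setup from test/nvmf/common.sh: the first E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side, the second port (cvl_0_1) stays in the root namespace as the initiator, an iptables rule tagged SPDK_NVMF opens TCP/4420, and both directions are ping-checked before nvmf_tgt is launched inside the namespace. A minimal standalone sketch of that setup follows; the interface names and 10.0.0.0/24 addressing are taken from the log, while error handling and teardown are simplified.

#!/usr/bin/env bash
# Minimal sketch of the target/initiator split built by nvmf_tcp_init.
# Assumes the two physical ports already exist (names from the log above).
set -e

TARGET_NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0          # becomes the target-side port, inside the netns
INITIATOR_IF=cvl_0_1       # stays in the root namespace as the initiator
TARGET_IP=10.0.0.2
INITIATOR_IP=10.0.0.1

# Start from a clean slate.
ip -4 addr flush "$TARGET_IF" || true
ip -4 addr flush "$INITIATOR_IF" || true

# Isolate the target port in its own namespace.
ip netns add "$TARGET_NS"
ip link set "$TARGET_IF" netns "$TARGET_NS"

# Address and bring up both sides (plus loopback inside the namespace).
ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
ip netns exec "$TARGET_NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
ip netns exec "$TARGET_NS" ip link set lo up

# Open the NVMe/TCP port; the comment is what lets teardown strip only
# SPDK's rules later (iptables-save | grep -v SPDK_NVMF | iptables-restore).
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF

# Sanity-check reachability in both directions, as the trace does.
ping -c 1 "$TARGET_IP"
ip netns exec "$TARGET_NS" ping -c 1 "$INITIATOR_IP"

Putting the target in a namespace is what lets a single machine act as both NVMe-oF target and initiator over real NIC ports without the kernel short-circuiting the traffic; the target app is then started inside it, exactly as the trace shows next with "ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt".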
00:25:21.654 [2024-11-27 09:57:36.286718] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:21.654 [2024-11-27 09:57:36.388474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:21.654 [2024-11-27 09:57:36.441785] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:21.654 [2024-11-27 09:57:36.441840] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:21.654 [2024-11-27 09:57:36.441849] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:21.654 [2024-11-27 09:57:36.441856] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:21.654 [2024-11-27 09:57:36.441863] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:21.654 [2024-11-27 09:57:36.444296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:21.654 [2024-11-27 09:57:36.444456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:21.654 [2024-11-27 09:57:36.444620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:21.654 [2024-11-27 09:57:36.444620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.654 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:21.654 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:25:21.654 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:21.654 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:21.915 [2024-11-27 09:57:37.168878] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:21.915 Malloc0 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:21.915 [2024-11-27 09:57:37.242212] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:21.915 [ 00:25:21.915 { 00:25:21.915 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:21.915 "subtype": "Discovery", 00:25:21.915 "listen_addresses": [], 00:25:21.915 "allow_any_host": true, 00:25:21.915 "hosts": [] 00:25:21.915 }, 00:25:21.915 { 00:25:21.915 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:21.915 "subtype": "NVMe", 00:25:21.915 "listen_addresses": [ 00:25:21.915 { 00:25:21.915 "trtype": "TCP", 00:25:21.915 "adrfam": "IPv4", 00:25:21.915 "traddr": "10.0.0.2", 00:25:21.915 "trsvcid": "4420" 00:25:21.915 } 00:25:21.915 ], 00:25:21.915 "allow_any_host": true, 00:25:21.915 "hosts": [], 00:25:21.915 "serial_number": "SPDK00000000000001", 00:25:21.915 "model_number": "SPDK bdev Controller", 00:25:21.915 "max_namespaces": 2, 00:25:21.915 "min_cntlid": 1, 00:25:21.915 "max_cntlid": 65519, 00:25:21.915 "namespaces": [ 00:25:21.915 { 00:25:21.915 "nsid": 1, 00:25:21.915 "bdev_name": "Malloc0", 00:25:21.915 "name": "Malloc0", 00:25:21.915 "nguid": "19E3890EE5614ED3A7A8BFEBC9CECE9F", 00:25:21.915 "uuid": "19e3890e-e561-4ed3-a7a8-bfebc9cece9f" 00:25:21.915 } 00:25:21.915 ] 00:25:21.915 } 00:25:21.915 ] 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3967509 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:25:21.915 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:25:22.177 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:22.177 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:25:22.177 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:25:22.177 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:25:22.177 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:22.177 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:22.177 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:25:22.177 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:25:22.177 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.177 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:22.177 Malloc1 00:25:22.177 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.177 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:25:22.177 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.177 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:22.438 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.438 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:25:22.438 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.438 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:22.438 Asynchronous Event Request test 00:25:22.438 Attaching to 10.0.0.2 00:25:22.438 Attached to 10.0.0.2 00:25:22.438 Registering asynchronous event callbacks... 00:25:22.438 Starting namespace attribute notice tests for all controllers... 00:25:22.438 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:25:22.438 aer_cb - Changed Namespace 00:25:22.438 Cleaning up... 
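[editor's note] host/aer.sh drives the sequence traced above through rpc_cmd: it stands up a subsystem with one namespace, launches the aer example tool (which touches /tmp/aer_touch_file once its Asynchronous Event Request is armed), waits for that file with a bounded poll, then hot-adds a second namespace so the target emits the namespace-attribute-changed notice that aer_cb reports. A condensed sketch of that flow follows, assuming an SPDK checkout as the working directory, a target already running and reachable on the default RPC socket, and the scripts/rpc.py and test/nvme/aer/aer paths; the poll mirrors the 200 x 0.1 s waitforfile loop from autotest_common.sh seen in the trace.

#!/usr/bin/env bash
# Condensed sketch of the host/aer.sh flow traced above.
set -e

rpc="scripts/rpc.py"            # assumed path to the SPDK RPC client
nqn=nqn.2016-06.io.spdk:cnode1
touch_file=/tmp/aer_touch_file

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 --name Malloc0
$rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 2
$rpc nvmf_subsystem_add_ns "$nqn" Malloc0
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

# Arm the AER: the tool touches $touch_file once it is ready
# (-n 2 and the touch file match the invocation in the trace).
rm -f "$touch_file"
test/nvme/aer/aer \
    -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$nqn" \
    -n 2 -t "$touch_file" &
aerpid=$!

# waitforfile: bounded poll, 200 iterations x 0.1 s.
i=0
while [ ! -e "$touch_file" ] && [ "$i" -lt 200 ]; do
    sleep 0.1
    i=$((i + 1))
done
[ -e "$touch_file" ]    # give up if the tool never armed its AER

# Hot-add a second namespace; the target then emits the notice that
# aer_cb reports in the log ("aer_cb - Changed Namespace").
$rpc bdev_malloc_create 64 4096 --name Malloc1
$rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 2

wait "$aerpid"          # the tool exits after handling the notice

The subsystem dump that follows in the log is the verification step: after the notice, nvmf_get_subsystems shows both Malloc0 (nsid 1) and Malloc1 (nsid 2) attached to cnode1.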
00:25:22.438 [ 00:25:22.438 { 00:25:22.438 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:22.438 "subtype": "Discovery", 00:25:22.438 "listen_addresses": [], 00:25:22.438 "allow_any_host": true, 00:25:22.438 "hosts": [] 00:25:22.438 }, 00:25:22.438 { 00:25:22.438 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:22.438 "subtype": "NVMe", 00:25:22.438 "listen_addresses": [ 00:25:22.438 { 00:25:22.438 "trtype": "TCP", 00:25:22.438 "adrfam": "IPv4", 00:25:22.438 "traddr": "10.0.0.2", 00:25:22.438 "trsvcid": "4420" 00:25:22.438 } 00:25:22.438 ], 00:25:22.438 "allow_any_host": true, 00:25:22.438 "hosts": [], 00:25:22.438 "serial_number": "SPDK00000000000001", 00:25:22.438 "model_number": "SPDK bdev Controller", 00:25:22.438 "max_namespaces": 2, 00:25:22.438 "min_cntlid": 1, 00:25:22.438 "max_cntlid": 65519, 00:25:22.438 "namespaces": [ 00:25:22.438 { 00:25:22.438 "nsid": 1, 00:25:22.438 "bdev_name": "Malloc0", 00:25:22.438 "name": "Malloc0", 00:25:22.438 "nguid": "19E3890EE5614ED3A7A8BFEBC9CECE9F", 00:25:22.438 "uuid": "19e3890e-e561-4ed3-a7a8-bfebc9cece9f" 00:25:22.438 }, 00:25:22.438 { 00:25:22.438 "nsid": 2, 00:25:22.438 "bdev_name": "Malloc1", 00:25:22.438 "name": "Malloc1", 00:25:22.438 "nguid": "77137F1F35D44AC3AD7E8AFC8F1493BF", 00:25:22.438 "uuid": "77137f1f-35d4-4ac3-ad7e-8afc8f1493bf" 00:25:22.438 } 00:25:22.438 ] 00:25:22.438 } 00:25:22.438 ] 00:25:22.438 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.438 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3967509 00:25:22.438 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:22.438 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.438 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:22.438 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.438 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:25:22.438 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.438 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:22.438 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.438 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:22.438 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.438 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:22.438 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.438 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:25:22.438 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:25:22.438 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:22.438 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:25:22.438 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:22.438 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:25:22.438 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:22.438 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:22.438 rmmod 
nvme_tcp 00:25:22.438 rmmod nvme_fabrics 00:25:22.438 rmmod nvme_keyring 00:25:22.439 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:22.439 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:25:22.439 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:25:22.439 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3967295 ']' 00:25:22.439 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3967295 00:25:22.439 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3967295 ']' 00:25:22.439 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3967295 00:25:22.439 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:25:22.439 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:22.439 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3967295 00:25:22.439 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:22.439 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:22.439 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3967295' 00:25:22.439 killing process with pid 3967295 00:25:22.439 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 3967295 00:25:22.439 09:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3967295 00:25:22.699 09:57:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:22.699 09:57:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:22.699 09:57:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:22.699 09:57:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:25:22.699 09:57:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:25:22.699 09:57:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:22.699 09:57:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:25:22.699 09:57:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:22.699 09:57:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:22.699 09:57:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.699 09:57:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:22.699 09:57:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:25.248 00:25:25.248 real 0m11.648s 00:25:25.248 user 0m8.702s 00:25:25.248 sys 0m6.099s 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:25.248 ************************************ 00:25:25.248 END TEST nvmf_aer 00:25:25.248 ************************************ 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.248 ************************************ 00:25:25.248 START TEST nvmf_async_init 00:25:25.248 ************************************ 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:25.248 * Looking for test storage... 00:25:25.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:25.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.248 --rc genhtml_branch_coverage=1 00:25:25.248 --rc genhtml_function_coverage=1 00:25:25.248 --rc genhtml_legend=1 00:25:25.248 --rc geninfo_all_blocks=1 00:25:25.248 --rc geninfo_unexecuted_blocks=1 00:25:25.248 00:25:25.248 ' 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:25.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.248 --rc genhtml_branch_coverage=1 00:25:25.248 --rc genhtml_function_coverage=1 00:25:25.248 --rc genhtml_legend=1 00:25:25.248 --rc geninfo_all_blocks=1 00:25:25.248 --rc geninfo_unexecuted_blocks=1 00:25:25.248 00:25:25.248 ' 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:25.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.248 --rc genhtml_branch_coverage=1 00:25:25.248 --rc genhtml_function_coverage=1 00:25:25.248 --rc genhtml_legend=1 00:25:25.248 --rc geninfo_all_blocks=1 00:25:25.248 --rc geninfo_unexecuted_blocks=1 00:25:25.248 00:25:25.248 ' 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:25.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.248 --rc genhtml_branch_coverage=1 00:25:25.248 --rc genhtml_function_coverage=1 00:25:25.248 --rc genhtml_legend=1 00:25:25.248 --rc geninfo_all_blocks=1 00:25:25.248 --rc geninfo_unexecuted_blocks=1 00:25:25.248 00:25:25.248 ' 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:25.248 09:57:40 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:25.248 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:25.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:25:25.249 09:57:40 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=23fbee9e5c194c59a97658331f6e482c 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:25:25.249 09:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:33.392 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:33.392 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:33.392 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.392 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:33.393 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:33.393 09:57:47 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:33.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:33.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:25:33.393 00:25:33.393 --- 10.0.0.2 ping statistics --- 00:25:33.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.393 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:33.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:33.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:25:33.393 00:25:33.393 --- 10.0.0.1 ping statistics --- 00:25:33.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.393 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:33.393 09:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:33.393 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3971832 00:25:33.393 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3971832 00:25:33.393 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:33.393 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3971832 ']' 00:25:33.393 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:33.393 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:33.393 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:33.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:33.393 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:33.393 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:33.393 [2024-11-27 09:57:48.073426] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
00:25:33.393 [2024-11-27 09:57:48.073493] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:33.393 [2024-11-27 09:57:48.174423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.393 [2024-11-27 09:57:48.225109] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:33.393 [2024-11-27 09:57:48.225172] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:33.393 [2024-11-27 09:57:48.225182] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:33.393 [2024-11-27 09:57:48.225189] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:33.393 [2024-11-27 09:57:48.225195] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:33.393 [2024-11-27 09:57:48.225938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.655 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:33.655 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:25:33.655 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:33.655 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:33.655 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:33.655 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:33.655 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:33.655 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.655 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:33.655 [2024-11-27 09:57:48.944254] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.655 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.655 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:33.655 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.655 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:33.655 null0 00:25:33.655 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.655 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:33.655 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.655 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:33.655 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.655 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:33.655 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:33.655 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:33.655 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.655 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 23fbee9e5c194c59a97658331f6e482c 00:25:33.655 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.655 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:33.655 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.655 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:33.655 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.655 09:57:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:33.655 [2024-11-27 09:57:49.004643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:33.655 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.656 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:33.656 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.656 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:33.916 nvme0n1 00:25:33.916 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.916 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:33.916 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.916 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:33.916 [ 00:25:33.916 { 00:25:33.916 "name": "nvme0n1", 00:25:33.916 "aliases": [ 00:25:33.916 "23fbee9e-5c19-4c59-a976-58331f6e482c" 00:25:33.916 ], 00:25:33.916 "product_name": "NVMe disk", 00:25:33.916 "block_size": 512, 00:25:33.916 "num_blocks": 2097152, 00:25:33.916 "uuid": "23fbee9e-5c19-4c59-a976-58331f6e482c", 00:25:33.916 "numa_id": 0, 00:25:33.916 "assigned_rate_limits": { 00:25:33.916 "rw_ios_per_sec": 0, 00:25:33.916 "rw_mbytes_per_sec": 0, 00:25:33.916 "r_mbytes_per_sec": 0, 00:25:33.916 "w_mbytes_per_sec": 0 00:25:33.916 }, 00:25:33.916 "claimed": false, 00:25:33.916 "zoned": false, 00:25:33.916 "supported_io_types": { 00:25:33.916 "read": true, 00:25:33.916 "write": true, 00:25:33.916 "unmap": false, 00:25:33.916 "flush": true, 00:25:33.916 "reset": true, 00:25:33.916 "nvme_admin": true, 00:25:33.916 "nvme_io": true, 00:25:33.916 "nvme_io_md": false, 00:25:33.916 "write_zeroes": true, 00:25:33.916 "zcopy": false, 00:25:33.916 "get_zone_info": false, 00:25:33.916 "zone_management": false, 00:25:33.916 "zone_append": false, 00:25:33.916 "compare": true, 00:25:33.916 "compare_and_write": true, 00:25:33.916 "abort": true, 00:25:33.916 "seek_hole": false, 00:25:33.916 "seek_data": false, 00:25:33.916 "copy": true, 00:25:33.916 "nvme_iov_md": false 00:25:33.916 }, 00:25:33.916 
"memory_domains": [ 00:25:33.916 { 00:25:33.916 "dma_device_id": "system", 00:25:33.916 "dma_device_type": 1 00:25:33.916 } 00:25:33.916 ], 00:25:33.916 "driver_specific": { 00:25:33.916 "nvme": [ 00:25:33.916 { 00:25:33.916 "trid": { 00:25:33.916 "trtype": "TCP", 00:25:33.916 "adrfam": "IPv4", 00:25:33.916 "traddr": "10.0.0.2", 00:25:33.916 "trsvcid": "4420", 00:25:33.916 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:33.916 }, 00:25:33.916 "ctrlr_data": { 00:25:33.916 "cntlid": 1, 00:25:33.916 "vendor_id": "0x8086", 00:25:33.916 "model_number": "SPDK bdev Controller", 00:25:33.916 "serial_number": "00000000000000000000", 00:25:33.916 "firmware_revision": "25.01", 00:25:33.916 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:33.916 "oacs": { 00:25:33.916 "security": 0, 00:25:33.916 "format": 0, 00:25:33.916 "firmware": 0, 00:25:33.916 "ns_manage": 0 00:25:33.916 }, 00:25:33.916 "multi_ctrlr": true, 00:25:33.916 "ana_reporting": false 00:25:33.916 }, 00:25:33.916 "vs": { 00:25:33.916 "nvme_version": "1.3" 00:25:33.916 }, 00:25:33.916 "ns_data": { 00:25:33.916 "id": 1, 00:25:33.916 "can_share": true 00:25:33.916 } 00:25:33.916 } 00:25:33.916 ], 00:25:33.916 "mp_policy": "active_passive" 00:25:33.916 } 00:25:33.916 } 00:25:33.916 ] 00:25:33.916 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.916 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:33.916 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.916 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:33.916 [2024-11-27 09:57:49.279390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:33.916 [2024-11-27 09:57:49.279476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a2ce0 (9): Bad file descriptor 00:25:34.177 [2024-11-27 09:57:49.411266] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:25:34.177 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.177 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:34.177 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.177 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:34.177 [ 00:25:34.177 { 00:25:34.177 "name": "nvme0n1", 00:25:34.177 "aliases": [ 00:25:34.177 "23fbee9e-5c19-4c59-a976-58331f6e482c" 00:25:34.177 ], 00:25:34.177 "product_name": "NVMe disk", 00:25:34.177 "block_size": 512, 00:25:34.177 "num_blocks": 2097152, 00:25:34.177 "uuid": "23fbee9e-5c19-4c59-a976-58331f6e482c", 00:25:34.177 "numa_id": 0, 00:25:34.177 "assigned_rate_limits": { 00:25:34.177 "rw_ios_per_sec": 0, 00:25:34.177 "rw_mbytes_per_sec": 0, 00:25:34.177 "r_mbytes_per_sec": 0, 00:25:34.177 "w_mbytes_per_sec": 0 00:25:34.177 }, 00:25:34.177 "claimed": false, 00:25:34.177 "zoned": false, 00:25:34.177 "supported_io_types": { 00:25:34.177 "read": true, 00:25:34.177 "write": true, 00:25:34.177 "unmap": false, 00:25:34.177 "flush": true, 00:25:34.177 "reset": true, 00:25:34.177 "nvme_admin": true, 00:25:34.177 "nvme_io": true, 00:25:34.177 "nvme_io_md": false, 00:25:34.177 "write_zeroes": true, 00:25:34.177 "zcopy": false, 00:25:34.177 "get_zone_info": false, 00:25:34.177 "zone_management": false, 00:25:34.177 "zone_append": false, 00:25:34.177 "compare": true, 00:25:34.177 "compare_and_write": true, 00:25:34.177 "abort": true, 00:25:34.177 "seek_hole": false, 00:25:34.177 "seek_data": false, 00:25:34.177 "copy": true, 00:25:34.177 "nvme_iov_md": false 00:25:34.177 }, 00:25:34.177 "memory_domains": [ 00:25:34.177 { 00:25:34.177 "dma_device_id": "system", 00:25:34.177 "dma_device_type": 1 00:25:34.177 } 00:25:34.177 ], 00:25:34.177 "driver_specific": { 00:25:34.177 "nvme": [ 00:25:34.177 { 00:25:34.177 "trid": { 00:25:34.177 "trtype": "TCP", 00:25:34.177 "adrfam": "IPv4", 00:25:34.177 "traddr": "10.0.0.2", 00:25:34.177 "trsvcid": "4420", 00:25:34.177 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:34.177 }, 00:25:34.177 "ctrlr_data": { 00:25:34.177 "cntlid": 2, 00:25:34.177 "vendor_id": "0x8086", 00:25:34.177 "model_number": "SPDK bdev Controller", 00:25:34.177 "serial_number": "00000000000000000000", 00:25:34.177 "firmware_revision": "25.01", 00:25:34.177 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:34.177 "oacs": { 00:25:34.177 "security": 0, 00:25:34.177 "format": 0, 00:25:34.177 "firmware": 0, 00:25:34.177 "ns_manage": 0 00:25:34.177 }, 00:25:34.177 "multi_ctrlr": true, 00:25:34.177 "ana_reporting": false 00:25:34.177 }, 00:25:34.177 "vs": { 00:25:34.177 "nvme_version": "1.3" 00:25:34.177 }, 00:25:34.177 "ns_data": { 00:25:34.177 "id": 1, 00:25:34.177 "can_share": true 00:25:34.177 } 00:25:34.177 } 00:25:34.177 ], 00:25:34.177 "mp_policy": "active_passive" 00:25:34.177 } 00:25:34.177 } 00:25:34.177 ] 00:25:34.177 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.177 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.177 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.177 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:34.177 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
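The remainder of the test repeats the attach over a TLS-protected listener on a second port. Condensed from the trace that follows into a standalone sketch (every flag, address, NQN, and the PSK value are taken verbatim from this run; only the $rpc shorthand is added):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    key_path=$(mktemp)
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
    chmod 0600 "$key_path"                      # the key is registered only after tightening permissions
    $rpc keyring_file_add_key key0 "$key_path"
    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0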
00:25:34.177 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:25:34.177 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.n1S2hVuepX 00:25:34.177 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:34.177 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.n1S2hVuepX 00:25:34.177 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.n1S2hVuepX 00:25:34.177 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.177 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:34.177 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.177 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:34.177 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.177 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:34.177 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.177 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:34.177 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.177 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:34.177 [2024-11-27 09:57:49.500058] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:34.177 [2024-11-27 09:57:49.500229] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:34.178 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.178 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:25:34.178 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.178 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:34.178 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.178 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:34.178 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.178 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:34.178 [2024-11-27 09:57:49.524140] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:34.178 nvme0n1 00:25:34.178 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.178 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:25:34.178 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.178 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:34.178 [ 00:25:34.178 { 00:25:34.178 "name": "nvme0n1", 00:25:34.178 "aliases": [ 00:25:34.178 "23fbee9e-5c19-4c59-a976-58331f6e482c" 00:25:34.178 ], 00:25:34.178 "product_name": "NVMe disk", 00:25:34.178 "block_size": 512, 00:25:34.178 "num_blocks": 2097152, 00:25:34.178 "uuid": "23fbee9e-5c19-4c59-a976-58331f6e482c", 00:25:34.178 "numa_id": 0, 00:25:34.178 "assigned_rate_limits": { 00:25:34.178 "rw_ios_per_sec": 0, 00:25:34.178 "rw_mbytes_per_sec": 0, 00:25:34.178 "r_mbytes_per_sec": 0, 00:25:34.178 "w_mbytes_per_sec": 0 00:25:34.178 }, 00:25:34.178 "claimed": false, 00:25:34.178 "zoned": false, 00:25:34.178 "supported_io_types": { 00:25:34.178 "read": true, 00:25:34.178 "write": true, 00:25:34.178 "unmap": false, 00:25:34.178 "flush": true, 00:25:34.178 "reset": true, 00:25:34.178 "nvme_admin": true, 00:25:34.178 "nvme_io": true, 00:25:34.178 "nvme_io_md": false, 00:25:34.178 "write_zeroes": true, 00:25:34.178 "zcopy": false, 00:25:34.178 "get_zone_info": false, 00:25:34.178 "zone_management": false, 00:25:34.178 "zone_append": false, 00:25:34.178 "compare": true, 00:25:34.178 "compare_and_write": true, 00:25:34.178 "abort": true, 00:25:34.178 "seek_hole": false, 00:25:34.178 "seek_data": false, 00:25:34.178 "copy": true, 00:25:34.178 "nvme_iov_md": false 00:25:34.178 }, 00:25:34.178 "memory_domains": [ 00:25:34.178 { 00:25:34.178 "dma_device_id": "system", 00:25:34.178 "dma_device_type": 1 00:25:34.178 } 00:25:34.178 ], 00:25:34.178 "driver_specific": { 00:25:34.178 "nvme": [ 00:25:34.178 { 00:25:34.178 "trid": { 00:25:34.178 "trtype": "TCP", 00:25:34.178 "adrfam": "IPv4", 00:25:34.178 "traddr": "10.0.0.2", 00:25:34.178 "trsvcid": "4421", 00:25:34.178 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:34.178 }, 00:25:34.178 "ctrlr_data": { 00:25:34.178 "cntlid": 3, 00:25:34.178 "vendor_id": "0x8086", 00:25:34.178 "model_number": "SPDK bdev Controller", 00:25:34.178 "serial_number": "00000000000000000000", 00:25:34.178 "firmware_revision": "25.01", 00:25:34.178 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:34.178 "oacs": { 00:25:34.178 "security": 0, 00:25:34.178 "format": 0, 00:25:34.178 "firmware": 0, 00:25:34.178 "ns_manage": 0 00:25:34.178 }, 00:25:34.178 "multi_ctrlr": true, 00:25:34.178 "ana_reporting": false 00:25:34.178 }, 00:25:34.178 "vs": { 00:25:34.178 "nvme_version": "1.3" 00:25:34.178 }, 00:25:34.178 "ns_data": { 00:25:34.178 "id": 1, 00:25:34.178 "can_share": true 00:25:34.178 } 00:25:34.178 } 00:25:34.178 ], 00:25:34.178 "mp_policy": "active_passive" 00:25:34.178 } 00:25:34.178 } 00:25:34.178 ] 00:25:34.178 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.178 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.178 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.178 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:34.178 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.178 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.n1S2hVuepX 00:25:34.439 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
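The PSK string used above follows the NVMe/TCP TLS PSK interchange layout, NVMeTLSkey-1:<hash>:<base64 payload>:, where the payload is the configured PSK with a 4-byte CRC32 appended. A quick decode with plain shell and coreutils (the negative head -c count is GNU-specific, and the 32-byte PSK length is simply what this particular key happens to contain):

    key='NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
    b64=${key#NVMeTLSkey-1:01:}; b64=${b64%:}
    echo "$b64" | base64 -d | head -c -4 | xxd   # configured PSK; the 4 stripped bytes are its CRC32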
00:25:34.439 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:25:34.439 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:34.439 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:25:34.439 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:34.439 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:25:34.439 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:34.439 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:34.439 rmmod nvme_tcp 00:25:34.439 rmmod nvme_fabrics 00:25:34.439 rmmod nvme_keyring 00:25:34.439 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:34.439 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:25:34.439 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:25:34.439 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3971832 ']' 00:25:34.439 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3971832 00:25:34.439 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3971832 ']' 00:25:34.439 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3971832 00:25:34.439 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:25:34.439 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:34.439 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3971832 00:25:34.439 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:34.439 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:34.439 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3971832' 00:25:34.439 killing process with pid 3971832 00:25:34.439 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3971832 00:25:34.439 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3971832 00:25:34.701 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:34.701 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:34.701 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:34.701 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:25:34.701 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:25:34.701 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:34.701 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:25:34.701 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:34.701 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:34.701 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
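nvmftestfini mirrors the setup from the start of the test: only firewall rules carrying the SPDK_NVMF comment are filtered out of the restore, the target namespace is removed, and the host-side modules are unloaded (the three rmmod lines above). A condensed sketch, with the interface and namespace names from this run:

    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules the test tagged
    ip netns delete cvl_0_0_ns_spdk                        # returns cvl_0_0 to the default namespace
    ip -4 addr flush cvl_0_1
    modprobe -r nvme-tcp                                   # pulls nvme_fabrics and nvme_keyring out with it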
00:25:34.701 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:34.701 09:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.616 09:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:36.616 00:25:36.616 real 0m11.805s 00:25:36.616 user 0m4.245s 00:25:36.616 sys 0m6.153s 00:25:36.616 09:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:36.616 09:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:36.616 ************************************ 00:25:36.616 END TEST nvmf_async_init 00:25:36.616 ************************************ 00:25:36.616 09:57:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:36.616 09:57:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:36.616 09:57:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:36.616 09:57:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.877 ************************************ 00:25:36.877 START TEST dma 00:25:36.877 ************************************ 00:25:36.877 09:57:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:36.877 * Looking for test storage... 00:25:36.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:36.877 09:57:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:36.877 09:57:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:25:36.877 09:57:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:36.877 09:57:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:36.877 09:57:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:36.877 09:57:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:36.877 09:57:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:36.877 09:57:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:25:36.877 09:57:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:25:36.877 09:57:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:25:36.877 09:57:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:25:36.877 09:57:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:25:36.877 09:57:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:25:36.877 09:57:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:25:36.877 09:57:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:36.877 09:57:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:25:36.877 09:57:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:36.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.878 --rc genhtml_branch_coverage=1 00:25:36.878 --rc genhtml_function_coverage=1 00:25:36.878 --rc genhtml_legend=1 00:25:36.878 --rc geninfo_all_blocks=1 00:25:36.878 --rc geninfo_unexecuted_blocks=1 00:25:36.878 00:25:36.878 ' 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:36.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.878 --rc genhtml_branch_coverage=1 00:25:36.878 --rc genhtml_function_coverage=1 00:25:36.878 --rc genhtml_legend=1 00:25:36.878 --rc geninfo_all_blocks=1 00:25:36.878 --rc geninfo_unexecuted_blocks=1 00:25:36.878 00:25:36.878 ' 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:36.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.878 --rc genhtml_branch_coverage=1 00:25:36.878 --rc genhtml_function_coverage=1 00:25:36.878 --rc genhtml_legend=1 00:25:36.878 --rc geninfo_all_blocks=1 00:25:36.878 --rc geninfo_unexecuted_blocks=1 00:25:36.878 00:25:36.878 ' 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:36.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.878 --rc genhtml_branch_coverage=1 00:25:36.878 --rc genhtml_function_coverage=1 00:25:36.878 --rc genhtml_legend=1 00:25:36.878 --rc geninfo_all_blocks=1 00:25:36.878 --rc geninfo_unexecuted_blocks=1 00:25:36.878 00:25:36.878 ' 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:36.878 
09:57:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:36.878 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:36.878 09:57:52 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:25:36.878 00:25:36.878 real 0m0.243s 00:25:36.878 user 0m0.135s 00:25:36.878 sys 0m0.123s 00:25:37.141 09:57:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:37.141 09:57:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:25:37.141 ************************************ 00:25:37.141 END TEST dma 00:25:37.141 ************************************ 00:25:37.141 09:57:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:37.141 09:57:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:37.141 09:57:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:37.141 09:57:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.141 ************************************ 00:25:37.141 START TEST nvmf_identify 00:25:37.141 
************************************ 00:25:37.141 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:37.141 * Looking for test storage... 00:25:37.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:37.141 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:37.141 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:25:37.141 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:37.402 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:37.402 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:37.402 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:37.402 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:37.402 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:25:37.402 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:37.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.403 --rc genhtml_branch_coverage=1 00:25:37.403 --rc genhtml_function_coverage=1 00:25:37.403 --rc genhtml_legend=1 00:25:37.403 --rc geninfo_all_blocks=1 00:25:37.403 --rc geninfo_unexecuted_blocks=1 00:25:37.403 00:25:37.403 ' 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:37.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.403 --rc genhtml_branch_coverage=1 00:25:37.403 --rc genhtml_function_coverage=1 00:25:37.403 --rc genhtml_legend=1 00:25:37.403 --rc geninfo_all_blocks=1 00:25:37.403 --rc geninfo_unexecuted_blocks=1 00:25:37.403 00:25:37.403 ' 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:37.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.403 --rc genhtml_branch_coverage=1 00:25:37.403 --rc genhtml_function_coverage=1 00:25:37.403 --rc genhtml_legend=1 00:25:37.403 --rc geninfo_all_blocks=1 00:25:37.403 --rc geninfo_unexecuted_blocks=1 00:25:37.403 00:25:37.403 ' 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:37.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.403 --rc genhtml_branch_coverage=1 00:25:37.403 --rc genhtml_function_coverage=1 00:25:37.403 --rc genhtml_legend=1 00:25:37.403 --rc geninfo_all_blocks=1 00:25:37.403 --rc geninfo_unexecuted_blocks=1 00:25:37.403 00:25:37.403 ' 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:37.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:25:37.403 09:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:45.645 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:45.645 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:25:45.645 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:45.645 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:45.645 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:45.645 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:45.645 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:45.645 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:25:45.645 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:45.645 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:25:45.645 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:25:45.645 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:25:45.645 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:25:45.645 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:45.646 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:45.646 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
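The device scan traced above seeds per-vendor PCI ID arrays (Intel E810 0x1592/0x159b, X722 0x37d2, and a range of Mellanox ConnectX IDs), keeps only the e810 set because this run exports SPDK_TEST_NVMF_NICS=e810, and echoes each matching function before resolving its net device. A minimal standalone sketch of the same vendor:device match over sysfs follows; it is an approximation only, and does not reproduce the pci_bus_cache helper that nvmf/common.sh actually uses:

  # Sketch: report Intel E810 ports (vendor 0x8086, device 0x1592/0x159b)
  # and any kernel net device bound to them, mirroring the 'Found ...' lines above.
  for dev in /sys/bus/pci/devices/*; do
      vendor=$(<"$dev/vendor") device=$(<"$dev/device")
      [[ $vendor == 0x8086 && $device =~ ^0x(1592|159b)$ ]] || continue
      echo "Found ${dev##*/} ($vendor - $device)"
      ls "$dev/net" 2>/dev/null    # e.g. cvl_0_0 when bound to the ice driver
  done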
00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:45.646 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:45.646 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:45.646 09:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:45.646 09:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:45.646 09:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:45.646 09:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:45.646 09:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:45.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:45.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.404 ms 00:25:45.646 00:25:45.646 --- 10.0.0.2 ping statistics --- 00:25:45.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.646 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:25:45.646 09:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:45.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:45.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:25:45.646 00:25:45.646 --- 10.0.0.1 ping statistics --- 00:25:45.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.646 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:25:45.646 09:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:45.646 09:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:25:45.646 09:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:45.646 09:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:45.646 09:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:45.646 09:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:45.646 09:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:45.646 09:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:45.646 09:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:45.646 09:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:45.646 09:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:45.646 09:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:45.646 09:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3976566 00:25:45.646 09:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:45.646 09:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:45.646 09:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3976566 00:25:45.646 09:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3976566 ']' 00:25:45.646 09:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:45.647 09:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:45.647 09:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:45.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:45.647 09:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:45.647 09:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:45.647 [2024-11-27 09:58:00.238273] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
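nvmf_tcp_init, traced above, wires the two E810 ports into a point-to-point test topology: the first port (cvl_0_0) moves into a dedicated network namespace to act as the target at 10.0.0.2, the second (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits TCP port 4420, and a ping in each direction proves reachability before the target application starts inside the namespace. Condensed into a standalone sequence (commands as logged; run as root, with the nvmf_tgt path shortened to the build tree):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean ports
  ip netns add cvl_0_0_ns_spdk                           # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port leaves the root ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF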
00:25:45.647 [2024-11-27 09:58:00.238339] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:45.647 [2024-11-27 09:58:00.338872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:45.647 [2024-11-27 09:58:00.393951] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:45.647 [2024-11-27 09:58:00.394006] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:45.647 [2024-11-27 09:58:00.394015] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:45.647 [2024-11-27 09:58:00.394022] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:45.647 [2024-11-27 09:58:00.394029] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:45.647 [2024-11-27 09:58:00.396443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:45.647 [2024-11-27 09:58:00.396672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:45.647 [2024-11-27 09:58:00.396834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:45.647 [2024-11-27 09:58:00.396835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.647 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:45.647 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:25:45.647 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:45.647 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.647 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:45.647 [2024-11-27 09:58:01.076490] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:45.647 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.647 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:45.647 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:45.647 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:45.909 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:45.909 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.909 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:45.909 Malloc0 00:25:45.909 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.909 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:45.909 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.909 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:45.909 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.910 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:45.910 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.910 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:45.910 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.910 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:45.910 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.910 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:45.910 [2024-11-27 09:58:01.206756] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:45.910 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.910 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:45.910 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.910 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:45.910 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.910 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:45.910 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.910 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:45.910 [ 00:25:45.910 { 00:25:45.910 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:45.910 "subtype": "Discovery", 00:25:45.910 "listen_addresses": [ 00:25:45.910 { 00:25:45.910 "trtype": "TCP", 00:25:45.910 "adrfam": "IPv4", 00:25:45.910 "traddr": "10.0.0.2", 00:25:45.910 "trsvcid": "4420" 00:25:45.910 } 00:25:45.910 ], 00:25:45.910 "allow_any_host": true, 00:25:45.910 "hosts": [] 00:25:45.910 }, 00:25:45.910 { 00:25:45.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:45.910 "subtype": "NVMe", 00:25:45.910 "listen_addresses": [ 00:25:45.910 { 00:25:45.910 "trtype": "TCP", 00:25:45.910 "adrfam": "IPv4", 00:25:45.910 "traddr": "10.0.0.2", 00:25:45.910 "trsvcid": "4420" 00:25:45.910 } 00:25:45.910 ], 00:25:45.910 "allow_any_host": true, 00:25:45.910 "hosts": [], 00:25:45.910 "serial_number": "SPDK00000000000001", 00:25:45.910 "model_number": "SPDK bdev Controller", 00:25:45.910 "max_namespaces": 32, 00:25:45.910 "min_cntlid": 1, 00:25:45.910 "max_cntlid": 65519, 00:25:45.910 "namespaces": [ 00:25:45.910 { 00:25:45.910 "nsid": 1, 00:25:45.910 "bdev_name": "Malloc0", 00:25:45.910 "name": "Malloc0", 00:25:45.910 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:45.910 "eui64": "ABCDEF0123456789", 00:25:45.910 "uuid": "e73b1d68-255a-472e-bec7-c17d89330ee1" 00:25:45.910 } 00:25:45.910 ] 00:25:45.910 } 00:25:45.910 ] 00:25:45.910 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.910 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:45.910 [2024-11-27 09:58:01.270081] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:25:45.910 [2024-11-27 09:58:01.270131] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3976622 ] 00:25:45.910 [2024-11-27 09:58:01.332007] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:25:45.910 [2024-11-27 09:58:01.332080] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:45.910 [2024-11-27 09:58:01.332086] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:45.910 [2024-11-27 09:58:01.332105] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:45.910 [2024-11-27 09:58:01.332118] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:45.910 [2024-11-27 09:58:01.336572] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:25:45.910 [2024-11-27 09:58:01.336618] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e40690 0 00:25:45.910 [2024-11-27 09:58:01.344189] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:45.910 [2024-11-27 09:58:01.344210] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:45.910 [2024-11-27 09:58:01.344215] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:45.910 [2024-11-27 09:58:01.344219] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:45.910 [2024-11-27 09:58:01.344265] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.910 [2024-11-27 09:58:01.344271] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.910 [2024-11-27 09:58:01.344275] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e40690) 00:25:45.910 [2024-11-27 09:58:01.344294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:45.910 [2024-11-27 09:58:01.344319] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2100, cid 0, qid 0 00:25:45.910 [2024-11-27 09:58:01.352173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.910 [2024-11-27 09:58:01.352184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.910 [2024-11-27 09:58:01.352188] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.910 [2024-11-27 09:58:01.352193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea2100) on tqpair=0x1e40690 00:25:45.910 [2024-11-27 09:58:01.352207] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:45.910 [2024-11-27 09:58:01.352216] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:25:45.910 [2024-11-27 09:58:01.352221] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:25:45.910 [2024-11-27 09:58:01.352237] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.910 [2024-11-27 09:58:01.352241] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.910 [2024-11-27 09:58:01.352244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e40690) 00:25:45.910 [2024-11-27 09:58:01.352258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.910 [2024-11-27 09:58:01.352275] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2100, cid 0, qid 0 00:25:45.910 [2024-11-27 09:58:01.352457] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.910 [2024-11-27 09:58:01.352463] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.910 [2024-11-27 09:58:01.352467] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.910 [2024-11-27 09:58:01.352471] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea2100) on tqpair=0x1e40690 00:25:45.910 [2024-11-27 09:58:01.352477] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:25:45.910 [2024-11-27 09:58:01.352484] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:25:45.910 [2024-11-27 09:58:01.352492] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.910 [2024-11-27 09:58:01.352496] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.910 [2024-11-27 09:58:01.352500] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e40690) 00:25:45.910 [2024-11-27 09:58:01.352507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.910 [2024-11-27 09:58:01.352517] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2100, cid 0, qid 0 00:25:45.910 [2024-11-27 09:58:01.352715] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.910 [2024-11-27 09:58:01.352721] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.910 [2024-11-27 09:58:01.352725] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.911 [2024-11-27 09:58:01.352729] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea2100) on tqpair=0x1e40690 00:25:45.911 [2024-11-27 09:58:01.352735] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:25:45.911 [2024-11-27 09:58:01.352743] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:25:45.911 [2024-11-27 09:58:01.352750] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.911 [2024-11-27 09:58:01.352754] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.911 [2024-11-27 09:58:01.352757] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e40690) 00:25:45.911 [2024-11-27 09:58:01.352764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.911 [2024-11-27 09:58:01.352774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2100, cid 0, qid 0 
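The *DEBUG* lines here come from the -L all flag passed to spdk_nvme_identify and trace the fabrics admin-queue bring-up in order: FABRIC CONNECT (returning CNTLID 0x0001), PROPERTY GET reads of the VS and CAP registers, and, continuing below, the CC.EN/CSTS.RDY enable handshake before IDENTIFY. The target state being probed was provisioned by the rpc_cmd calls shown earlier; assuming rpc_cmd resolves to scripts/rpc.py against the default /var/tmp/spdk.sock (as this harness sets it up), the same subsystem can be built and queried by hand:

  # Assumption: rpc_cmd == scripts/rpc.py on /var/tmp/spdk.sock, as in this harness.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # -L all enables the *DEBUG* tracing interleaved through this section of the log.
  ./build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all

Further down, the identify-done lines show the practical transfer ceiling this negotiates: the TCP transport advertises max_xfer_size 4294967295, but the controller's MDTS clamps it to 131072 bytes, i.e. 2^5 * 4096 given the 4096-byte minimum memory page size reported in the identify output at the end of this run.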
00:25:45.911 [2024-11-27 09:58:01.352951] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.911 [2024-11-27 09:58:01.352958] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.911 [2024-11-27 09:58:01.352961] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.911 [2024-11-27 09:58:01.352965] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea2100) on tqpair=0x1e40690 00:25:45.911 [2024-11-27 09:58:01.352970] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:45.911 [2024-11-27 09:58:01.352981] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.911 [2024-11-27 09:58:01.352985] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.911 [2024-11-27 09:58:01.352988] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e40690) 00:25:45.911 [2024-11-27 09:58:01.352995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.911 [2024-11-27 09:58:01.353005] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2100, cid 0, qid 0 00:25:45.911 [2024-11-27 09:58:01.353202] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.911 [2024-11-27 09:58:01.353209] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.911 [2024-11-27 09:58:01.353212] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.911 [2024-11-27 09:58:01.353216] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea2100) on tqpair=0x1e40690 00:25:45.911 [2024-11-27 09:58:01.353221] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:25:45.911 [2024-11-27 09:58:01.353226] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:25:45.911 [2024-11-27 09:58:01.353233] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:45.911 [2024-11-27 09:58:01.353344] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:25:45.911 [2024-11-27 09:58:01.353348] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:45.911 [2024-11-27 09:58:01.353357] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.911 [2024-11-27 09:58:01.353361] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.911 [2024-11-27 09:58:01.353365] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e40690) 00:25:45.911 [2024-11-27 09:58:01.353372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.911 [2024-11-27 09:58:01.353382] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2100, cid 0, qid 0 00:25:45.911 [2024-11-27 09:58:01.353592] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.911 [2024-11-27 09:58:01.353598] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.911 [2024-11-27 09:58:01.353602] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.911 [2024-11-27 09:58:01.353606] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea2100) on tqpair=0x1e40690 00:25:45.911 [2024-11-27 09:58:01.353611] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:45.911 [2024-11-27 09:58:01.353620] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.911 [2024-11-27 09:58:01.353624] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.911 [2024-11-27 09:58:01.353628] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e40690) 00:25:45.911 [2024-11-27 09:58:01.353635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.911 [2024-11-27 09:58:01.353645] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2100, cid 0, qid 0 00:25:45.911 [2024-11-27 09:58:01.353825] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.911 [2024-11-27 09:58:01.353831] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.911 [2024-11-27 09:58:01.353835] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.911 [2024-11-27 09:58:01.353839] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea2100) on tqpair=0x1e40690 00:25:45.911 [2024-11-27 09:58:01.353843] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:45.911 [2024-11-27 09:58:01.353848] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:25:45.911 [2024-11-27 09:58:01.353856] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:25:45.911 [2024-11-27 09:58:01.353868] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:25:45.911 [2024-11-27 09:58:01.353880] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.911 [2024-11-27 09:58:01.353884] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e40690) 00:25:45.911 [2024-11-27 09:58:01.353891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.911 [2024-11-27 09:58:01.353903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2100, cid 0, qid 0 00:25:45.911 [2024-11-27 09:58:01.354146] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:45.911 [2024-11-27 09:58:01.354153] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:45.911 [2024-11-27 09:58:01.354156] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:45.911 [2024-11-27 09:58:01.354167] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e40690): datao=0, datal=4096, cccid=0 00:25:45.911 [2024-11-27 09:58:01.354172] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1ea2100) on tqpair(0x1e40690): expected_datao=0, payload_size=4096 00:25:45.911 [2024-11-27 09:58:01.354177] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.911 [2024-11-27 09:58:01.354198] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:45.911 [2024-11-27 09:58:01.354203] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:45.911 [2024-11-27 09:58:01.354363] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.911 [2024-11-27 09:58:01.354369] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.911 [2024-11-27 09:58:01.354373] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.911 [2024-11-27 09:58:01.354377] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea2100) on tqpair=0x1e40690 00:25:45.911 [2024-11-27 09:58:01.354387] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:25:45.911 [2024-11-27 09:58:01.354392] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:25:45.911 [2024-11-27 09:58:01.354396] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:25:45.911 [2024-11-27 09:58:01.354405] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:25:45.911 [2024-11-27 09:58:01.354410] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:25:45.911 [2024-11-27 09:58:01.354415] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:25:45.911 [2024-11-27 09:58:01.354427] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:25:45.911 [2024-11-27 09:58:01.354434] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.911 [2024-11-27 09:58:01.354438] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.911 [2024-11-27 09:58:01.354441] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e40690) 00:25:45.911 [2024-11-27 09:58:01.354449] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:45.911 [2024-11-27 09:58:01.354461] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2100, cid 0, qid 0 00:25:45.911 [2024-11-27 09:58:01.354642] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.912 [2024-11-27 09:58:01.354650] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.912 [2024-11-27 09:58:01.354654] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.912 [2024-11-27 09:58:01.354658] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea2100) on tqpair=0x1e40690 00:25:45.912 [2024-11-27 09:58:01.354666] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.912 [2024-11-27 09:58:01.354672] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.912 [2024-11-27 09:58:01.354676] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e40690) 00:25:45.912 
[2024-11-27 09:58:01.354682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.912 [2024-11-27 09:58:01.354689] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.912 [2024-11-27 09:58:01.354692] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.912 [2024-11-27 09:58:01.354696] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e40690) 00:25:45.912 [2024-11-27 09:58:01.354702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.912 [2024-11-27 09:58:01.354708] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.912 [2024-11-27 09:58:01.354711] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.912 [2024-11-27 09:58:01.354715] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e40690) 00:25:45.912 [2024-11-27 09:58:01.354721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.912 [2024-11-27 09:58:01.354727] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.912 [2024-11-27 09:58:01.354731] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.912 [2024-11-27 09:58:01.354734] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e40690) 00:25:45.912 [2024-11-27 09:58:01.354740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.912 [2024-11-27 09:58:01.354745] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:45.912 [2024-11-27 09:58:01.354753] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:45.912 [2024-11-27 09:58:01.354760] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.912 [2024-11-27 09:58:01.354764] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e40690) 00:25:45.912 [2024-11-27 09:58:01.354771] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.912 [2024-11-27 09:58:01.354783] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2100, cid 0, qid 0 00:25:45.912 [2024-11-27 09:58:01.354788] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2280, cid 1, qid 0 00:25:45.912 [2024-11-27 09:58:01.354793] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2400, cid 2, qid 0 00:25:45.912 [2024-11-27 09:58:01.354798] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2580, cid 3, qid 0 00:25:45.912 [2024-11-27 09:58:01.354803] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2700, cid 4, qid 0 00:25:45.912 [2024-11-27 09:58:01.355041] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.912 [2024-11-27 09:58:01.355048] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.912 [2024-11-27 09:58:01.355051] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:25:45.912 [2024-11-27 09:58:01.355055] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea2700) on tqpair=0x1e40690 00:25:45.912 [2024-11-27 09:58:01.355064] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:25:45.912 [2024-11-27 09:58:01.355069] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:25:45.912 [2024-11-27 09:58:01.355081] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.912 [2024-11-27 09:58:01.355085] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e40690) 00:25:45.912 [2024-11-27 09:58:01.355094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.912 [2024-11-27 09:58:01.355105] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2700, cid 4, qid 0 00:25:45.912 [2024-11-27 09:58:01.355299] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:45.912 [2024-11-27 09:58:01.355306] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:45.912 [2024-11-27 09:58:01.355310] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:45.912 [2024-11-27 09:58:01.355314] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e40690): datao=0, datal=4096, cccid=4 00:25:45.912 [2024-11-27 09:58:01.355318] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ea2700) on tqpair(0x1e40690): expected_datao=0, payload_size=4096 00:25:45.912 [2024-11-27 09:58:01.355323] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.912 [2024-11-27 09:58:01.355330] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:45.912 [2024-11-27 09:58:01.355333] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:46.179 [2024-11-27 09:58:01.400169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.179 [2024-11-27 09:58:01.400183] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.179 [2024-11-27 09:58:01.400187] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.179 [2024-11-27 09:58:01.400191] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea2700) on tqpair=0x1e40690 00:25:46.179 [2024-11-27 09:58:01.400208] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:25:46.179 [2024-11-27 09:58:01.400242] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.179 [2024-11-27 09:58:01.400247] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e40690) 00:25:46.179 [2024-11-27 09:58:01.400256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.179 [2024-11-27 09:58:01.400264] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.179 [2024-11-27 09:58:01.400268] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.179 [2024-11-27 09:58:01.400271] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e40690) 00:25:46.179 [2024-11-27 09:58:01.400278] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:46.179 [2024-11-27 09:58:01.400295] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2700, cid 4, qid 0 00:25:46.179 [2024-11-27 09:58:01.400301] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2880, cid 5, qid 0 00:25:46.179 [2024-11-27 09:58:01.400536] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:46.179 [2024-11-27 09:58:01.400543] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:46.179 [2024-11-27 09:58:01.400547] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:46.179 [2024-11-27 09:58:01.400551] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e40690): datao=0, datal=1024, cccid=4 00:25:46.179 [2024-11-27 09:58:01.400555] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ea2700) on tqpair(0x1e40690): expected_datao=0, payload_size=1024 00:25:46.179 [2024-11-27 09:58:01.400560] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.179 [2024-11-27 09:58:01.400567] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:46.179 [2024-11-27 09:58:01.400571] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:46.179 [2024-11-27 09:58:01.400577] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.179 [2024-11-27 09:58:01.400583] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.179 [2024-11-27 09:58:01.400586] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.179 [2024-11-27 09:58:01.400590] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea2880) on tqpair=0x1e40690 00:25:46.179 [2024-11-27 09:58:01.442369] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.179 [2024-11-27 09:58:01.442381] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.179 [2024-11-27 09:58:01.442385] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.179 [2024-11-27 09:58:01.442389] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea2700) on tqpair=0x1e40690 00:25:46.179 [2024-11-27 09:58:01.442403] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.179 [2024-11-27 09:58:01.442407] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e40690) 00:25:46.179 [2024-11-27 09:58:01.442414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.179 [2024-11-27 09:58:01.442430] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2700, cid 4, qid 0 00:25:46.179 [2024-11-27 09:58:01.442685] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:46.179 [2024-11-27 09:58:01.442692] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:46.179 [2024-11-27 09:58:01.442695] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:46.179 [2024-11-27 09:58:01.442699] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e40690): datao=0, datal=3072, cccid=4 00:25:46.179 [2024-11-27 09:58:01.442704] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ea2700) on tqpair(0x1e40690): expected_datao=0, payload_size=3072 00:25:46.179 [2024-11-27 09:58:01.442708] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.179 [2024-11-27 09:58:01.442726] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:46.179 [2024-11-27 09:58:01.442730] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:46.179 [2024-11-27 09:58:01.442866] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.179 [2024-11-27 09:58:01.442873] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.179 [2024-11-27 09:58:01.442876] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.179 [2024-11-27 09:58:01.442880] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea2700) on tqpair=0x1e40690 00:25:46.179 [2024-11-27 09:58:01.442889] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.179 [2024-11-27 09:58:01.442893] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e40690) 00:25:46.179 [2024-11-27 09:58:01.442900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.179 [2024-11-27 09:58:01.442913] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2700, cid 4, qid 0 00:25:46.179 [2024-11-27 09:58:01.443199] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:46.179 [2024-11-27 09:58:01.443205] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:46.179 [2024-11-27 09:58:01.443209] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:46.179 [2024-11-27 09:58:01.443213] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e40690): datao=0, datal=8, cccid=4 00:25:46.179 [2024-11-27 09:58:01.443217] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ea2700) on tqpair(0x1e40690): expected_datao=0, payload_size=8 00:25:46.179 [2024-11-27 09:58:01.443222] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.179 [2024-11-27 09:58:01.443228] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:46.179 [2024-11-27 09:58:01.443232] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:46.179 [2024-11-27 09:58:01.488175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.179 [2024-11-27 09:58:01.488186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.179 [2024-11-27 09:58:01.488189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.179 [2024-11-27 09:58:01.488193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea2700) on tqpair=0x1e40690 00:25:46.179 ===================================================== 00:25:46.179 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:46.179 ===================================================== 00:25:46.179 Controller Capabilities/Features 00:25:46.179 ================================ 00:25:46.179 Vendor ID: 0000 00:25:46.179 Subsystem Vendor ID: 0000 00:25:46.179 Serial Number: .................... 00:25:46.179 Model Number: ........................................ 
00:25:46.179 Firmware Version: 25.01
00:25:46.179 Recommended Arb Burst: 0
00:25:46.179 IEEE OUI Identifier: 00 00 00
00:25:46.179 Multi-path I/O
00:25:46.179 May have multiple subsystem ports: No
00:25:46.179 May have multiple controllers: No
00:25:46.179 Associated with SR-IOV VF: No
00:25:46.179 Max Data Transfer Size: 131072
00:25:46.179 Max Number of Namespaces: 0
00:25:46.179 Max Number of I/O Queues: 1024
00:25:46.179 NVMe Specification Version (VS): 1.3
00:25:46.179 NVMe Specification Version (Identify): 1.3
00:25:46.179 Maximum Queue Entries: 128
00:25:46.179 Contiguous Queues Required: Yes
00:25:46.179 Arbitration Mechanisms Supported
00:25:46.179 Weighted Round Robin: Not Supported
00:25:46.179 Vendor Specific: Not Supported
00:25:46.179 Reset Timeout: 15000 ms
00:25:46.179 Doorbell Stride: 4 bytes
00:25:46.179 NVM Subsystem Reset: Not Supported
00:25:46.179 Command Sets Supported
00:25:46.179 NVM Command Set: Supported
00:25:46.179 Boot Partition: Not Supported
00:25:46.179 Memory Page Size Minimum: 4096 bytes
00:25:46.179 Memory Page Size Maximum: 4096 bytes
00:25:46.179 Persistent Memory Region: Not Supported
00:25:46.179 Optional Asynchronous Events Supported
00:25:46.179 Namespace Attribute Notices: Not Supported
00:25:46.179 Firmware Activation Notices: Not Supported
00:25:46.179 ANA Change Notices: Not Supported
00:25:46.179 PLE Aggregate Log Change Notices: Not Supported
00:25:46.179 LBA Status Info Alert Notices: Not Supported
00:25:46.179 EGE Aggregate Log Change Notices: Not Supported
00:25:46.179 Normal NVM Subsystem Shutdown event: Not Supported
00:25:46.179 Zone Descriptor Change Notices: Not Supported
00:25:46.179 Discovery Log Change Notices: Supported
00:25:46.179 Controller Attributes
00:25:46.179 128-bit Host Identifier: Not Supported
00:25:46.179 Non-Operational Permissive Mode: Not Supported
00:25:46.179 NVM Sets: Not Supported
00:25:46.179 Read Recovery Levels: Not Supported
00:25:46.180 Endurance Groups: Not Supported
00:25:46.180 Predictable Latency Mode: Not Supported
00:25:46.180 Traffic Based Keep Alive: Not Supported
00:25:46.180 Namespace Granularity: Not Supported
00:25:46.180 SQ Associations: Not Supported
00:25:46.180 UUID List: Not Supported
00:25:46.180 Multi-Domain Subsystem: Not Supported
00:25:46.180 Fixed Capacity Management: Not Supported
00:25:46.180 Variable Capacity Management: Not Supported
00:25:46.180 Delete Endurance Group: Not Supported
00:25:46.180 Delete NVM Set: Not Supported
00:25:46.180 Extended LBA Formats Supported: Not Supported
00:25:46.180 Flexible Data Placement Supported: Not Supported
00:25:46.180
00:25:46.180 Controller Memory Buffer Support
00:25:46.180 ================================
00:25:46.180 Supported: No
00:25:46.180
00:25:46.180 Persistent Memory Region Support
00:25:46.180 ================================
00:25:46.180 Supported: No
00:25:46.180
00:25:46.180 Admin Command Set Attributes
00:25:46.180 ============================
00:25:46.180 Security Send/Receive: Not Supported
00:25:46.180 Format NVM: Not Supported
00:25:46.180 Firmware Activate/Download: Not Supported
00:25:46.180 Namespace Management: Not Supported
00:25:46.180 Device Self-Test: Not Supported
00:25:46.180 Directives: Not Supported
00:25:46.180 NVMe-MI: Not Supported
00:25:46.180 Virtualization Management: Not Supported
00:25:46.180 Doorbell Buffer Config: Not Supported
00:25:46.180 Get LBA Status Capability: Not Supported
00:25:46.180 Command & Feature Lockdown Capability: Not Supported
00:25:46.180 Abort Command Limit: 1
00:25:46.180 Async Event Request Limit: 4
00:25:46.180 Number of Firmware Slots: N/A
00:25:46.180 Firmware Slot 1 Read-Only: N/A
00:25:46.180 Firmware Activation Without Reset: N/A
00:25:46.180 Multiple Update Detection Support: N/A
00:25:46.180 Firmware Update Granularity: No Information Provided
00:25:46.180 Per-Namespace SMART Log: No
00:25:46.180 Asymmetric Namespace Access Log Page: Not Supported
00:25:46.180 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:25:46.180 Command Effects Log Page: Not Supported
00:25:46.180 Get Log Page Extended Data: Supported
00:25:46.180 Telemetry Log Pages: Not Supported
00:25:46.180 Persistent Event Log Pages: Not Supported
00:25:46.180 Supported Log Pages Log Page: May Support
00:25:46.180 Commands Supported & Effects Log Page: Not Supported
00:25:46.180 Feature Identifiers & Effects Log Page: May Support
00:25:46.180 NVMe-MI Commands & Effects Log Page: May Support
00:25:46.180 Data Area 4 for Telemetry Log: Not Supported
00:25:46.180 Error Log Page Entries Supported: 128
00:25:46.180 Keep Alive: Not Supported
00:25:46.180
00:25:46.180 NVM Command Set Attributes
00:25:46.180 ==========================
00:25:46.180 Submission Queue Entry Size
00:25:46.180 Max: 1
00:25:46.180 Min: 1
00:25:46.180 Completion Queue Entry Size
00:25:46.180 Max: 1
00:25:46.180 Min: 1
00:25:46.180 Number of Namespaces: 0
00:25:46.180 Compare Command: Not Supported
00:25:46.180 Write Uncorrectable Command: Not Supported
00:25:46.180 Dataset Management Command: Not Supported
00:25:46.180 Write Zeroes Command: Not Supported
00:25:46.180 Set Features Save Field: Not Supported
00:25:46.180 Reservations: Not Supported
00:25:46.180 Timestamp: Not Supported
00:25:46.180 Copy: Not Supported
00:25:46.180 Volatile Write Cache: Not Present
00:25:46.180 Atomic Write Unit (Normal): 1
00:25:46.180 Atomic Write Unit (PFail): 1
00:25:46.180 Atomic Compare & Write Unit: 1
00:25:46.180 Fused Compare & Write: Supported
00:25:46.180 Scatter-Gather List
00:25:46.180 SGL Command Set: Supported
00:25:46.180 SGL Keyed: Supported
00:25:46.180 SGL Bit Bucket Descriptor: Not Supported
00:25:46.180 SGL Metadata Pointer: Not Supported
00:25:46.180 Oversized SGL: Not Supported
00:25:46.180 SGL Metadata Address: Not Supported
00:25:46.180 SGL Offset: Supported
00:25:46.180 Transport SGL Data Block: Not Supported
00:25:46.180 Replay Protected Memory Block: Not Supported
00:25:46.180
00:25:46.180 Firmware Slot Information
00:25:46.180 =========================
00:25:46.180 Active slot: 0
00:25:46.180
00:25:46.180
00:25:46.180 Error Log
00:25:46.180 =========
00:25:46.180
00:25:46.180 Active Namespaces
00:25:46.180 =================
00:25:46.180 Discovery Log Page
00:25:46.180 ==================
00:25:46.180 Generation Counter: 2
00:25:46.180 Number of Records: 2
00:25:46.180 Record Format: 0
00:25:46.180
00:25:46.180 Discovery Log Entry 0
00:25:46.180 ----------------------
00:25:46.180 Transport Type: 3 (TCP)
00:25:46.180 Address Family: 1 (IPv4)
00:25:46.180 Subsystem Type: 3 (Current Discovery Subsystem)
00:25:46.180 Entry Flags:
00:25:46.180 Duplicate Returned Information: 1
00:25:46.180 Explicit Persistent Connection Support for Discovery: 1
00:25:46.180 Transport Requirements:
00:25:46.180 Secure Channel: Not Required
00:25:46.180 Port ID: 0 (0x0000)
00:25:46.180 Controller ID: 65535 (0xffff)
00:25:46.180 Admin Max SQ Size: 128
00:25:46.180 Transport Service Identifier: 4420
00:25:46.180 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:25:46.180 Transport Address: 10.0.0.2
00:25:46.180 Discovery Log Entry 1
00:25:46.180 ----------------------
00:25:46.180 Transport Type: 3 (TCP)
00:25:46.180 Address Family: 1 (IPv4)
00:25:46.180 Subsystem Type: 2 (NVM Subsystem)
00:25:46.180 Entry Flags:
00:25:46.180 Duplicate Returned Information: 0
00:25:46.180 Explicit Persistent Connection Support for Discovery: 0
00:25:46.180 Transport Requirements:
00:25:46.180 Secure Channel: Not Required
00:25:46.180 Port ID: 0 (0x0000)
00:25:46.180 Controller ID: 65535 (0xffff)
00:25:46.180 Admin Max SQ Size: 128
00:25:46.180 Transport Service Identifier: 4420
00:25:46.180 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:25:46.180 Transport Address: 10.0.0.2 [2024-11-27 09:58:01.488300] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:25:46.180 [2024-11-27 09:58:01.488312] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea2100) on tqpair=0x1e40690
00:25:46.180 [2024-11-27 09:58:01.488319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:46.180 [2024-11-27 09:58:01.488325] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea2280) on tqpair=0x1e40690
00:25:46.180 [2024-11-27 09:58:01.488330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:46.180 [2024-11-27 09:58:01.488335] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea2400) on tqpair=0x1e40690
00:25:46.180 [2024-11-27 09:58:01.488340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:46.180 [2024-11-27 09:58:01.488344] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea2580) on tqpair=0x1e40690
00:25:46.180 [2024-11-27 09:58:01.488349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:46.180 [2024-11-27 09:58:01.488362] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:46.180 [2024-11-27 09:58:01.488366] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:46.180 [2024-11-27 09:58:01.488370] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e40690)
00:25:46.180 [2024-11-27 09:58:01.488378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.180 [2024-11-27 09:58:01.488393] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2580, cid 3, qid 0
00:25:46.180 [2024-11-27 09:58:01.488552] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:46.180 [2024-11-27 09:58:01.488559] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:46.180 [2024-11-27 09:58:01.488562] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:46.180 [2024-11-27 09:58:01.488566] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea2580) on tqpair=0x1e40690
00:25:46.180 [2024-11-27 09:58:01.488574] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:46.180 [2024-11-27 09:58:01.488578] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:46.180 [2024-11-27 09:58:01.488581] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e40690)
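The dump above is rendered from the raw discovery log page those GET LOG PAGE reads returned: a 1024-byte header followed by one 1024-byte record per subsystem, which is exactly why two records made the earlier read 3072 bytes. Each record maps onto struct spdk_nvmf_discovery_log_page_entry from spdk/nvmf_spec.h. A minimal sketch of fetching and printing the same fields through SPDK's public API, assuming a connected discovery controller and single-threaded polling; the fixed buffer size and helper names are illustrative, not part of the test:

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static bool g_log_done;

static void
log_page_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	g_log_done = true;
}

static void
dump_discovery_log(struct spdk_nvme_ctrlr *ctrlr)
{
	/* 4096 bytes: the 1024-byte header plus up to three records.
	 * A real reader sizes a second read from log->numrec. */
	struct spdk_nvmf_discovery_log_page *log = calloc(1, 4096);

	if (log == NULL) {
		return;
	}
	g_log_done = false;
	spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY, 0,
					 log, 4096, 0, log_page_cb, NULL);
	while (!g_log_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}

	printf("Generation Counter: %" PRIu64 "\n", log->genctr);
	for (uint64_t i = 0; i < log->numrec && i < 3; i++) {
		const struct spdk_nvmf_discovery_log_page_entry *e = &log->entries[i];

		printf("Entry %" PRIu64 ": trtype %u subtype %u trsvcid %.32s subnqn %.256s traddr %.256s\n",
		       i, e->trtype, e->subtype, e->trsvcid, e->subnqn, e->traddr);
	}
	free(log);
}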
00:25:46.180 [2024-11-27 09:58:01.488588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.180 [2024-11-27 09:58:01.488602] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2580, cid 3, qid 0
00:25:46.180 [2024-11-27 09:58:01.488806] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:46.180 [2024-11-27 09:58:01.488812] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:46.180 [2024-11-27 09:58:01.488816] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:46.180 [2024-11-27 09:58:01.488819] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea2580) on tqpair=0x1e40690
00:25:46.181 [2024-11-27 09:58:01.488824] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us
00:25:46.181 [2024-11-27 09:58:01.488830] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms
00:25:46.181 [2024-11-27 09:58:01.488840] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:46.181 [2024-11-27 09:58:01.488844] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:46.181 [2024-11-27 09:58:01.488848] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e40690)
00:25:46.181 [2024-11-27 09:58:01.488855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.181 [2024-11-27 09:58:01.488865] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2580, cid 3, qid 0
00:25:46.181 [2024-11-27 09:58:01.489039] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:46.181 [2024-11-27 09:58:01.489045] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:46.181 [2024-11-27 09:58:01.489049] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:46.181 [2024-11-27 09:58:01.489053] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea2580) on tqpair=0x1e40690
00:25:46.181 [2024-11-27 09:58:01.489064] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:46.181 [2024-11-27 09:58:01.489068] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:46.181 [2024-11-27 09:58:01.489072] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e40690)
00:25:46.181 [2024-11-27 09:58:01.489079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.181 [2024-11-27 09:58:01.489089] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2580, cid 3, qid 0
00:25:46.181 [2024-11-27 09:58:01.489279] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:46.181 [2024-11-27 09:58:01.489286] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:46.181 [2024-11-27 09:58:01.489290] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:46.181 [2024-11-27 09:58:01.489294] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea2580) on tqpair=0x1e40690
00:25:46.181 [2024-11-27 09:58:01.489304] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:46.181 [2024-11-27 09:58:01.489308] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:46.181 [2024-11-27 09:58:01.489311]
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e40690) 00:25:46.181 [2024-11-27 09:58:01.489318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.181 [2024-11-27 09:58:01.489329] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2580, cid 3, qid 0 00:25:46.181 [2024-11-27 09:58:01.489535] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.181 [2024-11-27 09:58:01.489542] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.181 [2024-11-27 09:58:01.489545] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.181 [2024-11-27 09:58:01.489549] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea2580) on tqpair=0x1e40690 00:25:46.181 [2024-11-27 09:58:01.489559] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.181 [2024-11-27 09:58:01.489563] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.181 [2024-11-27 09:58:01.489566] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e40690) 00:25:46.181 [2024-11-27 09:58:01.489573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.181 [2024-11-27 09:58:01.489583] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2580, cid 3, qid 0 00:25:46.181 [2024-11-27 09:58:01.489764] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.181 [2024-11-27 09:58:01.489770] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.181 [2024-11-27 09:58:01.489774] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.181 [2024-11-27 09:58:01.489777] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea2580) on tqpair=0x1e40690 00:25:46.181 [2024-11-27 09:58:01.489787] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.181 [2024-11-27 09:58:01.489791] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.181 [2024-11-27 09:58:01.489795] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e40690) 00:25:46.181 [2024-11-27 09:58:01.489802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.181 [2024-11-27 09:58:01.489813] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2580, cid 3, qid 0 00:25:46.181 [2024-11-27 09:58:01.489973] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.181 [2024-11-27 09:58:01.489982] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.181 [2024-11-27 09:58:01.489985] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.181 [2024-11-27 09:58:01.489989] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea2580) on tqpair=0x1e40690 00:25:46.181 [2024-11-27 09:58:01.489999] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.181 [2024-11-27 09:58:01.490003] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.181 [2024-11-27 09:58:01.490007] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e40690) 00:25:46.181 [2024-11-27 09:58:01.490013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.181 [2024-11-27 09:58:01.490024] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2580, cid 3, qid 0 00:25:46.181 [2024-11-27 09:58:01.490209] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.181 [2024-11-27 09:58:01.490217] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.181 [2024-11-27 09:58:01.490221] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.181 [2024-11-27 09:58:01.490225] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea2580) on tqpair=0x1e40690 00:25:46.181 [2024-11-27 09:58:01.490235] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.181 [2024-11-27 09:58:01.490239] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.181 [2024-11-27 09:58:01.490242] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e40690) 00:25:46.181 [2024-11-27 09:58:01.490249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.181 [2024-11-27 09:58:01.490260] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2580, cid 3, qid 0 00:25:46.181 [2024-11-27 09:58:01.490475] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.181 [2024-11-27 09:58:01.490482] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.181 [2024-11-27 09:58:01.490485] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.181 [2024-11-27 09:58:01.490489] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea2580) on tqpair=0x1e40690 00:25:46.181 [2024-11-27 09:58:01.490499] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.181 [2024-11-27 09:58:01.490503] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.181 [2024-11-27 09:58:01.490507] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e40690) 00:25:46.181 [2024-11-27 09:58:01.490513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.181 [2024-11-27 09:58:01.490524] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2580, cid 3, qid 0 00:25:46.181 [2024-11-27 09:58:01.490694] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.181 [2024-11-27 09:58:01.490700] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.181 [2024-11-27 09:58:01.490704] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.181 [2024-11-27 09:58:01.490707] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea2580) on tqpair=0x1e40690 00:25:46.181 [2024-11-27 09:58:01.490717] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.181 [2024-11-27 09:58:01.490721] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.181 [2024-11-27 09:58:01.490725] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e40690) 00:25:46.181 [2024-11-27 09:58:01.490732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.181 [2024-11-27 09:58:01.490742] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2580, cid 3, qid 0 00:25:46.181 
[2024-11-27 09:58:01.490917] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.181 [2024-11-27 09:58:01.490924] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.181 [2024-11-27 09:58:01.490930] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.181 [2024-11-27 09:58:01.490934] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea2580) on tqpair=0x1e40690 00:25:46.181 [2024-11-27 09:58:01.490944] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.181 [2024-11-27 09:58:01.490948] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.181 [2024-11-27 09:58:01.490952] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e40690) 00:25:46.181 [2024-11-27 09:58:01.490959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.181 [2024-11-27 09:58:01.490969] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2580, cid 3, qid 0 00:25:46.181 [2024-11-27 09:58:01.491135] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.181 [2024-11-27 09:58:01.491142] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.181 [2024-11-27 09:58:01.491145] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.181 [2024-11-27 09:58:01.491149] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea2580) on tqpair=0x1e40690 00:25:46.181 [2024-11-27 09:58:01.491166] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.181 [2024-11-27 09:58:01.491171] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.181 [2024-11-27 09:58:01.491175] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e40690) 00:25:46.181 [2024-11-27 09:58:01.491181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.181 [2024-11-27 09:58:01.491192] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2580, cid 3, qid 0 00:25:46.181 [2024-11-27 09:58:01.491381] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.181 [2024-11-27 09:58:01.491387] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.181 [2024-11-27 09:58:01.491391] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.181 [2024-11-27 09:58:01.491395] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea2580) on tqpair=0x1e40690 00:25:46.181 [2024-11-27 09:58:01.491405] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.181 [2024-11-27 09:58:01.491409] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.181 [2024-11-27 09:58:01.491412] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e40690) 00:25:46.181 [2024-11-27 09:58:01.491419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.181 [2024-11-27 09:58:01.491429] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2580, cid 3, qid 0 00:25:46.182 [2024-11-27 09:58:01.491674] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.182 [2024-11-27 09:58:01.491680] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
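Each repeated FABRIC PROPERTY GET qid:0 cid:3 exchange in this stretch is one poll of the CSTS register during controller shutdown: on a fabrics transport, register access travels as Fabrics Property Get/Set capsules on the admin queue, so the driver keeps re-issuing the same five-line PDU exchange until CSTS.SHST reports that the shutdown it requested through CC.SHN (the single FABRIC PROPERTY SET earlier) has completed. The public API hides the property capsules behind register accessors; a hedged sketch of the same check in C, with a helper name that is mine and not SPDK's:

#include <stdbool.h>
#include "spdk/nvme.h"

/* One CSTS read, equivalent to one FABRIC PROPERTY GET cycle above. */
static bool
shutdown_complete(struct spdk_nvme_ctrlr *ctrlr)
{
	union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

	return csts.bits.shst == SPDK_NVME_SHST_COMPLETE;
}

In the trace the loop exits quickly: with RTD3E = 0 the driver falls back to its 10000 ms default timeout, and the controller reports shutdown complete after about 7 ms.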
00:25:46.182 [2024-11-27 09:58:01.491684] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.182 [2024-11-27 09:58:01.491688] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea2580) on tqpair=0x1e40690 00:25:46.182 [2024-11-27 09:58:01.491699] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.182 [2024-11-27 09:58:01.491703] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.182 [2024-11-27 09:58:01.491706] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e40690) 00:25:46.182 [2024-11-27 09:58:01.491713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.182 [2024-11-27 09:58:01.491723] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2580, cid 3, qid 0 00:25:46.182 [2024-11-27 09:58:01.491898] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.182 [2024-11-27 09:58:01.491904] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.182 [2024-11-27 09:58:01.491907] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.182 [2024-11-27 09:58:01.491914] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea2580) on tqpair=0x1e40690 00:25:46.182 [2024-11-27 09:58:01.491924] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.182 [2024-11-27 09:58:01.491928] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.182 [2024-11-27 09:58:01.491931] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e40690) 00:25:46.182 [2024-11-27 09:58:01.491938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.182 [2024-11-27 09:58:01.491949] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2580, cid 3, qid 0 00:25:46.182 [2024-11-27 09:58:01.496172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.182 [2024-11-27 09:58:01.496182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.182 [2024-11-27 09:58:01.496185] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.182 [2024-11-27 09:58:01.496189] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ea2580) on tqpair=0x1e40690 00:25:46.182 [2024-11-27 09:58:01.496199] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.182 [2024-11-27 09:58:01.496204] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.182 [2024-11-27 09:58:01.496207] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e40690) 00:25:46.182 [2024-11-27 09:58:01.496214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.182 [2024-11-27 09:58:01.496225] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea2580, cid 3, qid 0 00:25:46.182 [2024-11-27 09:58:01.496450] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.182 [2024-11-27 09:58:01.496457] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.182 [2024-11-27 09:58:01.496460] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.182 [2024-11-27 09:58:01.496464] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1ea2580) on tqpair=0x1e40690 00:25:46.182 [2024-11-27 09:58:01.496473] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:25:46.182 00:25:46.182 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:46.182 [2024-11-27 09:58:01.543407] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:25:46.182 [2024-11-27 09:58:01.543452] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3976738 ] 00:25:46.182 [2024-11-27 09:58:01.599717] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:25:46.182 [2024-11-27 09:58:01.599779] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:46.182 [2024-11-27 09:58:01.599784] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:46.182 [2024-11-27 09:58:01.599804] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:46.182 [2024-11-27 09:58:01.599816] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:46.182 [2024-11-27 09:58:01.603460] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:25:46.182 [2024-11-27 09:58:01.603499] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e02690 0 00:25:46.182 [2024-11-27 09:58:01.611175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:46.182 [2024-11-27 09:58:01.611191] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:46.182 [2024-11-27 09:58:01.611196] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:46.182 [2024-11-27 09:58:01.611199] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:46.182 [2024-11-27 09:58:01.611236] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.182 [2024-11-27 09:58:01.611241] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.182 [2024-11-27 09:58:01.611245] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e02690) 00:25:46.182 [2024-11-27 09:58:01.611260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:46.182 [2024-11-27 09:58:01.611284] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e64100, cid 0, qid 0 00:25:46.182 [2024-11-27 09:58:01.618174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.182 [2024-11-27 09:58:01.618185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.182 [2024-11-27 09:58:01.618189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.182 [2024-11-27 09:58:01.618194] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e64100) on tqpair=0x1e02690 00:25:46.182 [2024-11-27 09:58:01.618207] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 
0x0001 00:25:46.182 [2024-11-27 09:58:01.618215] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:25:46.182 [2024-11-27 09:58:01.618221] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:25:46.182 [2024-11-27 09:58:01.618235] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.182 [2024-11-27 09:58:01.618240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.182 [2024-11-27 09:58:01.618243] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e02690) 00:25:46.182 [2024-11-27 09:58:01.618252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.182 [2024-11-27 09:58:01.618268] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e64100, cid 0, qid 0 00:25:46.182 [2024-11-27 09:58:01.618457] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.182 [2024-11-27 09:58:01.618463] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.182 [2024-11-27 09:58:01.618467] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.182 [2024-11-27 09:58:01.618471] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e64100) on tqpair=0x1e02690 00:25:46.182 [2024-11-27 09:58:01.618476] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:25:46.182 [2024-11-27 09:58:01.618483] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:25:46.182 [2024-11-27 09:58:01.618491] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.182 [2024-11-27 09:58:01.618494] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.182 [2024-11-27 09:58:01.618498] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e02690) 00:25:46.182 [2024-11-27 09:58:01.618505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.182 [2024-11-27 09:58:01.618515] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e64100, cid 0, qid 0 00:25:46.182 [2024-11-27 09:58:01.618717] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.182 [2024-11-27 09:58:01.618725] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.182 [2024-11-27 09:58:01.618728] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.182 [2024-11-27 09:58:01.618732] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e64100) on tqpair=0x1e02690 00:25:46.182 [2024-11-27 09:58:01.618742] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:25:46.182 [2024-11-27 09:58:01.618752] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:25:46.182 [2024-11-27 09:58:01.618759] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.182 [2024-11-27 09:58:01.618762] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.182 [2024-11-27 09:58:01.618766] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e02690) 00:25:46.182 [2024-11-27 09:58:01.618773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.182 [2024-11-27 09:58:01.618783] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e64100, cid 0, qid 0 00:25:46.182 [2024-11-27 09:58:01.619016] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.182 [2024-11-27 09:58:01.619023] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.182 [2024-11-27 09:58:01.619027] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.182 [2024-11-27 09:58:01.619030] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e64100) on tqpair=0x1e02690 00:25:46.182 [2024-11-27 09:58:01.619036] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:46.182 [2024-11-27 09:58:01.619045] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.182 [2024-11-27 09:58:01.619049] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.182 [2024-11-27 09:58:01.619053] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e02690) 00:25:46.182 [2024-11-27 09:58:01.619060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.182 [2024-11-27 09:58:01.619070] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e64100, cid 0, qid 0 00:25:46.182 [2024-11-27 09:58:01.619271] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.182 [2024-11-27 09:58:01.619278] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.182 [2024-11-27 09:58:01.619281] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.182 [2024-11-27 09:58:01.619285] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e64100) on tqpair=0x1e02690 00:25:46.183 [2024-11-27 09:58:01.619290] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:25:46.183 [2024-11-27 09:58:01.619294] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:25:46.183 [2024-11-27 09:58:01.619302] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:46.183 [2024-11-27 09:58:01.619411] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:25:46.183 [2024-11-27 09:58:01.619416] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:46.183 [2024-11-27 09:58:01.619424] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.183 [2024-11-27 09:58:01.619428] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.183 [2024-11-27 09:58:01.619432] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e02690) 00:25:46.183 [2024-11-27 09:58:01.619438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:46.183 [2024-11-27 09:58:01.619449] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e64100, cid 0, qid 0 00:25:46.183 [2024-11-27 09:58:01.619632] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.183 [2024-11-27 09:58:01.619638] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.183 [2024-11-27 09:58:01.619644] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.183 [2024-11-27 09:58:01.619648] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e64100) on tqpair=0x1e02690 00:25:46.183 [2024-11-27 09:58:01.619653] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:46.183 [2024-11-27 09:58:01.619662] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.183 [2024-11-27 09:58:01.619666] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.183 [2024-11-27 09:58:01.619670] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e02690) 00:25:46.183 [2024-11-27 09:58:01.619677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.183 [2024-11-27 09:58:01.619687] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e64100, cid 0, qid 0 00:25:46.183 [2024-11-27 09:58:01.619919] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.183 [2024-11-27 09:58:01.619925] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.183 [2024-11-27 09:58:01.619928] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.183 [2024-11-27 09:58:01.619932] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e64100) on tqpair=0x1e02690 00:25:46.183 [2024-11-27 09:58:01.619937] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:46.183 [2024-11-27 09:58:01.619941] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:25:46.183 [2024-11-27 09:58:01.619949] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:25:46.183 [2024-11-27 09:58:01.619957] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:25:46.183 [2024-11-27 09:58:01.619967] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.183 [2024-11-27 09:58:01.619970] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e02690) 00:25:46.183 [2024-11-27 09:58:01.619977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.183 [2024-11-27 09:58:01.619988] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e64100, cid 0, qid 0 00:25:46.183 [2024-11-27 09:58:01.620299] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:46.183 [2024-11-27 09:58:01.620305] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:46.183 [2024-11-27 09:58:01.620309] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 
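Everything from "setting state to connect adminq" at the start of this second run through the "CC.EN = 1 && CSTS.RDY = 1 - controller is ready" transition above is driven by one spdk_nvme_connect() call; the "setting state to ..." lines are the driver's internal init state machine. A minimal sketch of the attach path the identify tool takes, reusing the same -r transport string from the command line; error handling is trimmed and the program name is illustrative:

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";    /* hypothetical app name */
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	spdk_nvme_transport_id_parse(&trid,
		"trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1");

	/* Runs connect adminq -> read vs/cap -> enable -> identify -> ready. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "connect failed\n");
		return 1;
	}
	printf("connected, CNTLID 0x%04x\n", spdk_nvme_ctrlr_get_data(ctrlr)->cntlid);

	/* Tears the controller down with the CC.SHN/CSTS.SHST sequence
	 * traced for the discovery controller earlier. */
	spdk_nvme_detach(ctrlr);
	return 0;
}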
00:25:46.183 [2024-11-27 09:58:01.620313] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e02690): datao=0, datal=4096, cccid=0 00:25:46.183 [2024-11-27 09:58:01.620318] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e64100) on tqpair(0x1e02690): expected_datao=0, payload_size=4096 00:25:46.183 [2024-11-27 09:58:01.620322] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.183 [2024-11-27 09:58:01.620330] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:46.183 [2024-11-27 09:58:01.620334] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:46.183 [2024-11-27 09:58:01.620474] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.183 [2024-11-27 09:58:01.620480] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.183 [2024-11-27 09:58:01.620484] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.183 [2024-11-27 09:58:01.620487] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e64100) on tqpair=0x1e02690 00:25:46.183 [2024-11-27 09:58:01.620495] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:25:46.183 [2024-11-27 09:58:01.620505] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:25:46.183 [2024-11-27 09:58:01.620510] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:25:46.183 [2024-11-27 09:58:01.620517] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:25:46.183 [2024-11-27 09:58:01.620522] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:25:46.183 [2024-11-27 09:58:01.620527] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:25:46.183 [2024-11-27 09:58:01.620538] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:25:46.183 [2024-11-27 09:58:01.620545] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.183 [2024-11-27 09:58:01.620549] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.183 [2024-11-27 09:58:01.620552] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e02690) 00:25:46.183 [2024-11-27 09:58:01.620560] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:46.183 [2024-11-27 09:58:01.620571] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e64100, cid 0, qid 0 00:25:46.183 [2024-11-27 09:58:01.620776] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.183 [2024-11-27 09:58:01.620785] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.183 [2024-11-27 09:58:01.620788] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.183 [2024-11-27 09:58:01.620792] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e64100) on tqpair=0x1e02690 00:25:46.183 [2024-11-27 09:58:01.620799] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.183 [2024-11-27 09:58:01.620803] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.183 [2024-11-27 09:58:01.620806] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e02690) 00:25:46.183 [2024-11-27 09:58:01.620812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:46.183 [2024-11-27 09:58:01.620819] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.183 [2024-11-27 09:58:01.620822] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.183 [2024-11-27 09:58:01.620826] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e02690) 00:25:46.183 [2024-11-27 09:58:01.620832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:46.183 [2024-11-27 09:58:01.620838] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.183 [2024-11-27 09:58:01.620841] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.183 [2024-11-27 09:58:01.620845] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e02690) 00:25:46.183 [2024-11-27 09:58:01.620851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:46.183 [2024-11-27 09:58:01.620857] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.183 [2024-11-27 09:58:01.620860] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.183 [2024-11-27 09:58:01.620864] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e02690) 00:25:46.183 [2024-11-27 09:58:01.620870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:46.183 [2024-11-27 09:58:01.620874] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:46.183 [2024-11-27 09:58:01.620884] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:46.183 [2024-11-27 09:58:01.620893] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.183 [2024-11-27 09:58:01.620896] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e02690) 00:25:46.183 [2024-11-27 09:58:01.620903] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.183 [2024-11-27 09:58:01.620915] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e64100, cid 0, qid 0 00:25:46.183 [2024-11-27 09:58:01.620920] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e64280, cid 1, qid 0 00:25:46.183 [2024-11-27 09:58:01.620925] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e64400, cid 2, qid 0 00:25:46.184 [2024-11-27 09:58:01.620930] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e64580, cid 3, qid 0 00:25:46.184 [2024-11-27 09:58:01.620934] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e64700, cid 4, qid 0 00:25:46.184 [2024-11-27 09:58:01.621203] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.184 [2024-11-27 
09:58:01.621210] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.184 [2024-11-27 09:58:01.621214] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.184 [2024-11-27 09:58:01.621218] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e64700) on tqpair=0x1e02690 00:25:46.184 [2024-11-27 09:58:01.621226] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:25:46.184 [2024-11-27 09:58:01.621231] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:46.184 [2024-11-27 09:58:01.621240] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:25:46.184 [2024-11-27 09:58:01.621246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:46.184 [2024-11-27 09:58:01.621252] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.184 [2024-11-27 09:58:01.621256] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.184 [2024-11-27 09:58:01.621260] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e02690) 00:25:46.184 [2024-11-27 09:58:01.621266] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:46.184 [2024-11-27 09:58:01.621277] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e64700, cid 4, qid 0 00:25:46.184 [2024-11-27 09:58:01.621452] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.184 [2024-11-27 09:58:01.621458] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.184 [2024-11-27 09:58:01.621462] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.184 [2024-11-27 09:58:01.621465] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e64700) on tqpair=0x1e02690 00:25:46.184 [2024-11-27 09:58:01.621535] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:25:46.184 [2024-11-27 09:58:01.621544] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:46.184 [2024-11-27 09:58:01.621552] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.184 [2024-11-27 09:58:01.621556] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e02690) 00:25:46.184 [2024-11-27 09:58:01.621562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.184 [2024-11-27 09:58:01.621573] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e64700, cid 4, qid 0 00:25:46.184 [2024-11-27 09:58:01.621811] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:46.184 [2024-11-27 09:58:01.621820] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:46.184 [2024-11-27 09:58:01.621823] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:46.184 [2024-11-27 09:58:01.621827] 
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e02690): datao=0, datal=4096, cccid=4 00:25:46.184 [2024-11-27 09:58:01.621832] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e64700) on tqpair(0x1e02690): expected_datao=0, payload_size=4096 00:25:46.184 [2024-11-27 09:58:01.621836] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.184 [2024-11-27 09:58:01.621843] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:46.184 [2024-11-27 09:58:01.621847] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:46.184 [2024-11-27 09:58:01.621997] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.184 [2024-11-27 09:58:01.622003] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.184 [2024-11-27 09:58:01.622007] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.184 [2024-11-27 09:58:01.622011] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e64700) on tqpair=0x1e02690 00:25:46.184 [2024-11-27 09:58:01.622020] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:25:46.184 [2024-11-27 09:58:01.622035] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:25:46.184 [2024-11-27 09:58:01.622045] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:25:46.184 [2024-11-27 09:58:01.622052] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.184 [2024-11-27 09:58:01.622056] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e02690) 00:25:46.184 [2024-11-27 09:58:01.622062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.184 [2024-11-27 09:58:01.622073] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e64700, cid 4, qid 0 00:25:46.184 [2024-11-27 09:58:01.626173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:46.184 [2024-11-27 09:58:01.626182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:46.184 [2024-11-27 09:58:01.626186] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:46.184 [2024-11-27 09:58:01.626189] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e02690): datao=0, datal=4096, cccid=4 00:25:46.184 [2024-11-27 09:58:01.626194] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e64700) on tqpair(0x1e02690): expected_datao=0, payload_size=4096 00:25:46.184 [2024-11-27 09:58:01.626198] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.184 [2024-11-27 09:58:01.626205] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:46.184 [2024-11-27 09:58:01.626208] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:46.184 [2024-11-27 09:58:01.626214] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.184 [2024-11-27 09:58:01.626220] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.184 [2024-11-27 09:58:01.626223] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.184 [2024-11-27 09:58:01.626227] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1e64700) on tqpair=0x1e02690 00:25:46.184 [2024-11-27 09:58:01.626242] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:46.184 [2024-11-27 09:58:01.626253] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:46.184 [2024-11-27 09:58:01.626260] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.184 [2024-11-27 09:58:01.626264] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e02690) 00:25:46.184 [2024-11-27 09:58:01.626270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.184 [2024-11-27 09:58:01.626286] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e64700, cid 4, qid 0 00:25:46.184 [2024-11-27 09:58:01.626519] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:46.184 [2024-11-27 09:58:01.626525] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:46.184 [2024-11-27 09:58:01.626529] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:46.184 [2024-11-27 09:58:01.626532] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e02690): datao=0, datal=4096, cccid=4 00:25:46.184 [2024-11-27 09:58:01.626537] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e64700) on tqpair(0x1e02690): expected_datao=0, payload_size=4096 00:25:46.184 [2024-11-27 09:58:01.626541] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.184 [2024-11-27 09:58:01.626548] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:46.184 [2024-11-27 09:58:01.626551] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:46.184 [2024-11-27 09:58:01.626671] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.184 [2024-11-27 09:58:01.626677] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.184 [2024-11-27 09:58:01.626681] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.184 [2024-11-27 09:58:01.626685] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e64700) on tqpair=0x1e02690 00:25:46.184 [2024-11-27 09:58:01.626693] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:46.184 [2024-11-27 09:58:01.626701] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:25:46.184 [2024-11-27 09:58:01.626710] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:25:46.184 [2024-11-27 09:58:01.626716] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:25:46.184 [2024-11-27 09:58:01.626722] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:46.184 [2024-11-27 09:58:01.626727] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 
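Two of the init steps traced just above come straight from the connect-time controller options: the four ASYNC EVENT REQUESTs queued after "configure AER" match the Async Event Request Limit the controller reports, and "Sending keep alive every 5000000 us" is half of the default 10000 ms keep_alive_timeout_ms. A sketch of setting both up through the public API, assuming the callback and helper names as mine:

#include <stdio.h>
#include "spdk/nvme.h"

/* Hypothetical AER handler invoked for each completed async event. */
static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	if (!spdk_nvme_cpl_is_error(cpl)) {
		printf("async event: cdw0=0x%08x\n", cpl->cdw0);
	}
}

static struct spdk_nvme_ctrlr *
connect_with_keep_alive(const struct spdk_nvme_transport_id *trid)
{
	struct spdk_nvme_ctrlr_opts opts;
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
	opts.keep_alive_timeout_ms = 10000;  /* driver sends KEEP ALIVE at half this */

	ctrlr = spdk_nvme_connect(trid, &opts, sizeof(opts));
	if (ctrlr != NULL) {
		spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
	}
	return ctrlr;
}

The KEEP ALIVE (18) commands themselves are issued from the driver's admin completion polling, so a caller only has to keep calling spdk_nvme_ctrlr_process_admin_completions() on the controller.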
00:25:46.184 [2024-11-27 09:58:01.626732] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:25:46.184 [2024-11-27 09:58:01.626737] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:25:46.184 [2024-11-27 09:58:01.626742] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:25:46.184 [2024-11-27 09:58:01.626760] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.184 [2024-11-27 09:58:01.626763] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e02690) 00:25:46.184 [2024-11-27 09:58:01.626770] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.184 [2024-11-27 09:58:01.626777] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.184 [2024-11-27 09:58:01.626781] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.184 [2024-11-27 09:58:01.626784] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e02690) 00:25:46.184 [2024-11-27 09:58:01.626791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:46.184 [2024-11-27 09:58:01.626804] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e64700, cid 4, qid 0 00:25:46.184 [2024-11-27 09:58:01.626810] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e64880, cid 5, qid 0 00:25:46.184 [2024-11-27 09:58:01.627030] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.184 [2024-11-27 09:58:01.627036] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.184 [2024-11-27 09:58:01.627040] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.184 [2024-11-27 09:58:01.627043] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e64700) on tqpair=0x1e02690 00:25:46.184 [2024-11-27 09:58:01.627050] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.185 [2024-11-27 09:58:01.627056] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.185 [2024-11-27 09:58:01.627060] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.185 [2024-11-27 09:58:01.627063] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e64880) on tqpair=0x1e02690 00:25:46.185 [2024-11-27 09:58:01.627073] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.185 [2024-11-27 09:58:01.627076] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e02690) 00:25:46.185 [2024-11-27 09:58:01.627083] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.185 [2024-11-27 09:58:01.627093] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e64880, cid 5, qid 0 00:25:46.185 [2024-11-27 09:58:01.627331] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.185 [2024-11-27 09:58:01.627337] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.185 [2024-11-27 09:58:01.627341] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.185 
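Once the controller reaches the ready state, the host sweeps the mandatory feature identifiers with GET FEATURES (Arbitration, cdw10:00000001, above; Power Management, Temperature Threshold, and Number of Queues follow below) while keep-alives run in parallel on cid 5. A sketch of the same queries via nvme-cli, again assuming a hypothetical /dev/nvme0 device on the initiator:

  nvme get-feature /dev/nvme0 -f 0x01   # Arbitration (cdw10:00000001)
  nvme get-feature /dev/nvme0 -f 0x02   # Power Management (cdw10:00000002)
  nvme get-feature /dev/nvme0 -f 0x04   # Temperature Threshold (cdw10:00000004)
  nvme get-feature /dev/nvme0 -f 0x07   # Number of Queues (cdw10:00000007)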
[2024-11-27 09:58:01.627345] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e64880) on tqpair=0x1e02690 00:25:46.185 [2024-11-27 09:58:01.627354] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.185 [2024-11-27 09:58:01.627358] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e02690) 00:25:46.185 [2024-11-27 09:58:01.627364] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.185 [2024-11-27 09:58:01.627374] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e64880, cid 5, qid 0 00:25:46.185 [2024-11-27 09:58:01.627636] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.185 [2024-11-27 09:58:01.627645] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.185 [2024-11-27 09:58:01.627648] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.185 [2024-11-27 09:58:01.627652] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e64880) on tqpair=0x1e02690 00:25:46.185 [2024-11-27 09:58:01.627661] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.185 [2024-11-27 09:58:01.627665] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e02690) 00:25:46.185 [2024-11-27 09:58:01.627671] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.185 [2024-11-27 09:58:01.627681] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e64880, cid 5, qid 0 00:25:46.185 [2024-11-27 09:58:01.627870] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.185 [2024-11-27 09:58:01.627877] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.185 [2024-11-27 09:58:01.627880] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.185 [2024-11-27 09:58:01.627884] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e64880) on tqpair=0x1e02690 00:25:46.185 [2024-11-27 09:58:01.627899] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.185 [2024-11-27 09:58:01.627904] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e02690) 00:25:46.185 [2024-11-27 09:58:01.627910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.185 [2024-11-27 09:58:01.627918] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.185 [2024-11-27 09:58:01.627923] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e02690) 00:25:46.185 [2024-11-27 09:58:01.627930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.185 [2024-11-27 09:58:01.627937] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.185 [2024-11-27 09:58:01.627941] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1e02690) 00:25:46.185 [2024-11-27 09:58:01.627947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:46.185 [2024-11-27 09:58:01.627955] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.185 [2024-11-27 09:58:01.627958] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1e02690) 00:25:46.185 [2024-11-27 09:58:01.627965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.185 [2024-11-27 09:58:01.627976] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e64880, cid 5, qid 0 00:25:46.185 [2024-11-27 09:58:01.627981] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e64700, cid 4, qid 0 00:25:46.185 [2024-11-27 09:58:01.627986] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e64a00, cid 6, qid 0 00:25:46.185 [2024-11-27 09:58:01.627990] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e64b80, cid 7, qid 0 00:25:46.185 [2024-11-27 09:58:01.628293] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:46.185 [2024-11-27 09:58:01.628300] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:46.185 [2024-11-27 09:58:01.628303] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:46.185 [2024-11-27 09:58:01.628307] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e02690): datao=0, datal=8192, cccid=5 00:25:46.185 [2024-11-27 09:58:01.628312] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e64880) on tqpair(0x1e02690): expected_datao=0, payload_size=8192 00:25:46.185 [2024-11-27 09:58:01.628316] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.185 [2024-11-27 09:58:01.628417] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:46.185 [2024-11-27 09:58:01.628421] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:46.185 [2024-11-27 09:58:01.628426] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:46.185 [2024-11-27 09:58:01.628432] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:46.185 [2024-11-27 09:58:01.628436] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:46.185 [2024-11-27 09:58:01.628439] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e02690): datao=0, datal=512, cccid=4 00:25:46.185 [2024-11-27 09:58:01.628444] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e64700) on tqpair(0x1e02690): expected_datao=0, payload_size=512 00:25:46.185 [2024-11-27 09:58:01.628448] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.185 [2024-11-27 09:58:01.628454] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:46.185 [2024-11-27 09:58:01.628458] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:46.185 [2024-11-27 09:58:01.628463] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:46.185 [2024-11-27 09:58:01.628469] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:46.185 [2024-11-27 09:58:01.628472] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:46.185 [2024-11-27 09:58:01.628476] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e02690): datao=0, datal=512, cccid=6 00:25:46.185 [2024-11-27 09:58:01.628480] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e64a00) on 
tqpair(0x1e02690): expected_datao=0, payload_size=512 00:25:46.185 [2024-11-27 09:58:01.628485] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.185 [2024-11-27 09:58:01.628496] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:46.185 [2024-11-27 09:58:01.628499] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:46.185 [2024-11-27 09:58:01.628505] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:46.185 [2024-11-27 09:58:01.628511] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:46.185 [2024-11-27 09:58:01.628514] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:46.185 [2024-11-27 09:58:01.628518] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e02690): datao=0, datal=4096, cccid=7 00:25:46.185 [2024-11-27 09:58:01.628522] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e64b80) on tqpair(0x1e02690): expected_datao=0, payload_size=4096 00:25:46.185 [2024-11-27 09:58:01.628526] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.185 [2024-11-27 09:58:01.628543] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:46.185 [2024-11-27 09:58:01.628547] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:46.185 [2024-11-27 09:58:01.628700] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.185 [2024-11-27 09:58:01.628706] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.185 [2024-11-27 09:58:01.628709] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.185 [2024-11-27 09:58:01.628713] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e64880) on tqpair=0x1e02690 00:25:46.185 [2024-11-27 09:58:01.628725] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.185 [2024-11-27 09:58:01.628731] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.185 [2024-11-27 09:58:01.628735] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.185 [2024-11-27 09:58:01.628738] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e64700) on tqpair=0x1e02690 00:25:46.185 [2024-11-27 09:58:01.628749] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.185 [2024-11-27 09:58:01.628755] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.185 [2024-11-27 09:58:01.628758] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.185 [2024-11-27 09:58:01.628762] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e64a00) on tqpair=0x1e02690 00:25:46.185 [2024-11-27 09:58:01.628769] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.185 [2024-11-27 09:58:01.628775] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.185 [2024-11-27 09:58:01.628778] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.185 [2024-11-27 09:58:01.628782] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e64b80) on tqpair=0x1e02690 00:25:46.185 ===================================================== 00:25:46.185 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:46.185 ===================================================== 00:25:46.185 Controller Capabilities/Features 00:25:46.185 ================================ 00:25:46.185 Vendor ID: 8086 00:25:46.185 Subsystem Vendor ID: 8086 
00:25:46.185 Serial Number: SPDK00000000000001 00:25:46.185 Model Number: SPDK bdev Controller 00:25:46.185 Firmware Version: 25.01 00:25:46.185 Recommended Arb Burst: 6 00:25:46.185 IEEE OUI Identifier: e4 d2 5c 00:25:46.185 Multi-path I/O 00:25:46.185 May have multiple subsystem ports: Yes 00:25:46.185 May have multiple controllers: Yes 00:25:46.185 Associated with SR-IOV VF: No 00:25:46.185 Max Data Transfer Size: 131072 00:25:46.185 Max Number of Namespaces: 32 00:25:46.185 Max Number of I/O Queues: 127 00:25:46.185 NVMe Specification Version (VS): 1.3 00:25:46.185 NVMe Specification Version (Identify): 1.3 00:25:46.185 Maximum Queue Entries: 128 00:25:46.185 Contiguous Queues Required: Yes 00:25:46.185 Arbitration Mechanisms Supported 00:25:46.185 Weighted Round Robin: Not Supported 00:25:46.186 Vendor Specific: Not Supported 00:25:46.186 Reset Timeout: 15000 ms 00:25:46.186 Doorbell Stride: 4 bytes 00:25:46.186 NVM Subsystem Reset: Not Supported 00:25:46.186 Command Sets Supported 00:25:46.186 NVM Command Set: Supported 00:25:46.186 Boot Partition: Not Supported 00:25:46.186 Memory Page Size Minimum: 4096 bytes 00:25:46.186 Memory Page Size Maximum: 4096 bytes 00:25:46.186 Persistent Memory Region: Not Supported 00:25:46.186 Optional Asynchronous Events Supported 00:25:46.186 Namespace Attribute Notices: Supported 00:25:46.186 Firmware Activation Notices: Not Supported 00:25:46.186 ANA Change Notices: Not Supported 00:25:46.186 PLE Aggregate Log Change Notices: Not Supported 00:25:46.186 LBA Status Info Alert Notices: Not Supported 00:25:46.186 EGE Aggregate Log Change Notices: Not Supported 00:25:46.186 Normal NVM Subsystem Shutdown event: Not Supported 00:25:46.186 Zone Descriptor Change Notices: Not Supported 00:25:46.186 Discovery Log Change Notices: Not Supported 00:25:46.186 Controller Attributes 00:25:46.186 128-bit Host Identifier: Supported 00:25:46.186 Non-Operational Permissive Mode: Not Supported 00:25:46.186 NVM Sets: Not Supported 00:25:46.186 Read Recovery Levels: Not Supported 00:25:46.186 Endurance Groups: Not Supported 00:25:46.186 Predictable Latency Mode: Not Supported 00:25:46.186 Traffic Based Keep Alive: Not Supported 00:25:46.186 Namespace Granularity: Not Supported 00:25:46.186 SQ Associations: Not Supported 00:25:46.186 UUID List: Not Supported 00:25:46.186 Multi-Domain Subsystem: Not Supported 00:25:46.186 Fixed Capacity Management: Not Supported 00:25:46.186 Variable Capacity Management: Not Supported 00:25:46.186 Delete Endurance Group: Not Supported 00:25:46.186 Delete NVM Set: Not Supported 00:25:46.186 Extended LBA Formats Supported: Not Supported 00:25:46.186 Flexible Data Placement Supported: Not Supported 00:25:46.186 00:25:46.186 Controller Memory Buffer Support 00:25:46.186 ================================ 00:25:46.186 Supported: No 00:25:46.186 00:25:46.186 Persistent Memory Region Support 00:25:46.186 ================================ 00:25:46.186 Supported: No 00:25:46.186 00:25:46.186 Admin Command Set Attributes 00:25:46.186 ============================ 00:25:46.186 Security Send/Receive: Not Supported 00:25:46.186 Format NVM: Not Supported 00:25:46.186 Firmware Activate/Download: Not Supported 00:25:46.186 Namespace Management: Not Supported 00:25:46.186 Device Self-Test: Not Supported 00:25:46.186 Directives: Not Supported 00:25:46.186 NVMe-MI: Not Supported 00:25:46.186 Virtualization Management: Not Supported 00:25:46.186 Doorbell Buffer Config: Not Supported 00:25:46.186 Get LBA Status Capability: Not Supported 00:25:46.186 Command &
Feature Lockdown Capability: Not Supported 00:25:46.186 Abort Command Limit: 4 00:25:46.186 Async Event Request Limit: 4 00:25:46.186 Number of Firmware Slots: N/A 00:25:46.186 Firmware Slot 1 Read-Only: N/A 00:25:46.186 Firmware Activation Without Reset: N/A 00:25:46.186 Multiple Update Detection Support: N/A 00:25:46.186 Firmware Update Granularity: No Information Provided 00:25:46.186 Per-Namespace SMART Log: No 00:25:46.186 Asymmetric Namespace Access Log Page: Not Supported 00:25:46.186 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:46.186 Command Effects Log Page: Supported 00:25:46.186 Get Log Page Extended Data: Supported 00:25:46.186 Telemetry Log Pages: Not Supported 00:25:46.186 Persistent Event Log Pages: Not Supported 00:25:46.186 Supported Log Pages Log Page: May Support 00:25:46.186 Commands Supported & Effects Log Page: Not Supported 00:25:46.186 Feature Identifiers & Effects Log Page: May Support 00:25:46.186 NVMe-MI Commands & Effects Log Page: May Support 00:25:46.186 Data Area 4 for Telemetry Log: Not Supported 00:25:46.186 Error Log Page Entries Supported: 128 00:25:46.186 Keep Alive: Supported 00:25:46.186 Keep Alive Granularity: 10000 ms 00:25:46.186 00:25:46.186 NVM Command Set Attributes 00:25:46.186 ========================== 00:25:46.186 Submission Queue Entry Size 00:25:46.186 Max: 64 00:25:46.186 Min: 64 00:25:46.186 Completion Queue Entry Size 00:25:46.186 Max: 16 00:25:46.186 Min: 16 00:25:46.186 Number of Namespaces: 32 00:25:46.186 Compare Command: Supported 00:25:46.186 Write Uncorrectable Command: Not Supported 00:25:46.186 Dataset Management Command: Supported 00:25:46.186 Write Zeroes Command: Supported 00:25:46.186 Set Features Save Field: Not Supported 00:25:46.186 Reservations: Supported 00:25:46.186 Timestamp: Not Supported 00:25:46.186 Copy: Supported 00:25:46.186 Volatile Write Cache: Present 00:25:46.186 Atomic Write Unit (Normal): 1 00:25:46.186 Atomic Write Unit (PFail): 1 00:25:46.186 Atomic Compare & Write Unit: 1 00:25:46.186 Fused Compare & Write: Supported 00:25:46.186 Scatter-Gather List 00:25:46.186 SGL Command Set: Supported 00:25:46.186 SGL Keyed: Supported 00:25:46.186 SGL Bit Bucket Descriptor: Not Supported 00:25:46.186 SGL Metadata Pointer: Not Supported 00:25:46.186 Oversized SGL: Not Supported 00:25:46.186 SGL Metadata Address: Not Supported 00:25:46.186 SGL Offset: Supported 00:25:46.186 Transport SGL Data Block: Not Supported 00:25:46.186 Replay Protected Memory Block: Not Supported 00:25:46.186 00:25:46.186 Firmware Slot Information 00:25:46.186 ========================= 00:25:46.186 Active slot: 1 00:25:46.186 Slot 1 Firmware Revision: 25.01 00:25:46.186 00:25:46.186 00:25:46.186 Commands Supported and Effects 00:25:46.186 ============================== 00:25:46.186 Admin Commands 00:25:46.186 -------------- 00:25:46.186 Get Log Page (02h): Supported 00:25:46.186 Identify (06h): Supported 00:25:46.186 Abort (08h): Supported 00:25:46.186 Set Features (09h): Supported 00:25:46.186 Get Features (0Ah): Supported 00:25:46.186 Asynchronous Event Request (0Ch): Supported 00:25:46.186 Keep Alive (18h): Supported 00:25:46.186 I/O Commands 00:25:46.186 ------------ 00:25:46.186 Flush (00h): Supported LBA-Change 00:25:46.186 Write (01h): Supported LBA-Change 00:25:46.186 Read (02h): Supported 00:25:46.186 Compare (05h): Supported 00:25:46.186 Write Zeroes (08h): Supported LBA-Change 00:25:46.186 Dataset Management (09h): Supported LBA-Change 00:25:46.186 Copy (19h): Supported LBA-Change 00:25:46.186 00:25:46.186 Error Log 00:25:46.186 
========= 00:25:46.186 00:25:46.186 Arbitration 00:25:46.186 =========== 00:25:46.186 Arbitration Burst: 1 00:25:46.186 00:25:46.186 Power Management 00:25:46.186 ================ 00:25:46.186 Number of Power States: 1 00:25:46.186 Current Power State: Power State #0 00:25:46.186 Power State #0: 00:25:46.186 Max Power: 0.00 W 00:25:46.186 Non-Operational State: Operational 00:25:46.186 Entry Latency: Not Reported 00:25:46.186 Exit Latency: Not Reported 00:25:46.186 Relative Read Throughput: 0 00:25:46.186 Relative Read Latency: 0 00:25:46.186 Relative Write Throughput: 0 00:25:46.186 Relative Write Latency: 0 00:25:46.186 Idle Power: Not Reported 00:25:46.186 Active Power: Not Reported 00:25:46.186 Non-Operational Permissive Mode: Not Supported 00:25:46.186 00:25:46.186 Health Information 00:25:46.186 ================== 00:25:46.186 Critical Warnings: 00:25:46.186 Available Spare Space: OK 00:25:46.186 Temperature: OK 00:25:46.186 Device Reliability: OK 00:25:46.186 Read Only: No 00:25:46.186 Volatile Memory Backup: OK 00:25:46.186 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:46.186 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:25:46.186 Available Spare: 0% 00:25:46.186 Available Spare Threshold: 0% 00:25:46.186 Life Percentage Used:[2024-11-27 09:58:01.628882] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.186 [2024-11-27 09:58:01.628888] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1e02690) 00:25:46.186 [2024-11-27 09:58:01.628895] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.186 [2024-11-27 09:58:01.628907] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e64b80, cid 7, qid 0 00:25:46.186 [2024-11-27 09:58:01.629102] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.186 [2024-11-27 09:58:01.629108] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.186 [2024-11-27 09:58:01.629111] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.186 [2024-11-27 09:58:01.629115] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e64b80) on tqpair=0x1e02690 00:25:46.186 [2024-11-27 09:58:01.629147] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:25:46.186 [2024-11-27 09:58:01.629164] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e64100) on tqpair=0x1e02690 00:25:46.186 [2024-11-27 09:58:01.629171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:46.186 [2024-11-27 09:58:01.629177] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e64280) on tqpair=0x1e02690 00:25:46.187 [2024-11-27 09:58:01.629184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:46.187 [2024-11-27 09:58:01.629189] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e64400) on tqpair=0x1e02690 00:25:46.187 [2024-11-27 09:58:01.629193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:46.187 [2024-11-27 09:58:01.629198] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e64580) on tqpair=0x1e02690 00:25:46.187 [2024-11-27 09:58:01.629203] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:46.187 [2024-11-27 09:58:01.629211] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.187 [2024-11-27 09:58:01.629215] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.187 [2024-11-27 09:58:01.629219] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e02690) 00:25:46.187 [2024-11-27 09:58:01.629226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.187 [2024-11-27 09:58:01.629239] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e64580, cid 3, qid 0 00:25:46.187 [2024-11-27 09:58:01.629455] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.187 [2024-11-27 09:58:01.629461] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.187 [2024-11-27 09:58:01.629464] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.187 [2024-11-27 09:58:01.629468] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e64580) on tqpair=0x1e02690 00:25:46.187 [2024-11-27 09:58:01.629475] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.187 [2024-11-27 09:58:01.629479] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.187 [2024-11-27 09:58:01.629482] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e02690) 00:25:46.187 [2024-11-27 09:58:01.629489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.187 [2024-11-27 09:58:01.629502] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e64580, cid 3, qid 0 00:25:46.187 [2024-11-27 09:58:01.629755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.187 [2024-11-27 09:58:01.629762] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.187 [2024-11-27 09:58:01.629767] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.187 [2024-11-27 09:58:01.629772] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e64580) on tqpair=0x1e02690 00:25:46.187 [2024-11-27 09:58:01.629777] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:25:46.187 [2024-11-27 09:58:01.629781] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:25:46.187 [2024-11-27 09:58:01.629791] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.187 [2024-11-27 09:58:01.629796] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.187 [2024-11-27 09:58:01.629799] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e02690) 00:25:46.187 [2024-11-27 09:58:01.629806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.187 [2024-11-27 09:58:01.629816] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e64580, cid 3, qid 0 00:25:46.187 [2024-11-27 09:58:01.630008] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.187 [2024-11-27 09:58:01.630015] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.187 [2024-11-27 
09:58:01.630018] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.187 [2024-11-27 09:58:01.630022] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e64580) on tqpair=0x1e02690 00:25:46.187 [2024-11-27 09:58:01.630032] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.187 [2024-11-27 09:58:01.630041] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.187 [2024-11-27 09:58:01.630044] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e02690) 00:25:46.187 [2024-11-27 09:58:01.630051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.187 [2024-11-27 09:58:01.630062] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e64580, cid 3, qid 0 00:25:46.187 [2024-11-27 09:58:01.634169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.187 [2024-11-27 09:58:01.634179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.187 [2024-11-27 09:58:01.634183] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.187 [2024-11-27 09:58:01.634187] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e64580) on tqpair=0x1e02690 00:25:46.187 [2024-11-27 09:58:01.634198] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.187 [2024-11-27 09:58:01.634202] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.187 [2024-11-27 09:58:01.634206] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e02690) 00:25:46.187 [2024-11-27 09:58:01.634213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.187 [2024-11-27 09:58:01.634225] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e64580, cid 3, qid 0 00:25:46.187 [2024-11-27 09:58:01.634409] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.187 [2024-11-27 09:58:01.634416] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.187 [2024-11-27 09:58:01.634419] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.187 [2024-11-27 09:58:01.634423] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e64580) on tqpair=0x1e02690 00:25:46.187 [2024-11-27 09:58:01.634431] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:25:46.451 0% 00:25:46.451 Data Units Read: 0 00:25:46.451 Data Units Written: 0 00:25:46.451 Host Read Commands: 0 00:25:46.451 Host Write Commands: 0 00:25:46.451 Controller Busy Time: 0 minutes 00:25:46.451 Power Cycles: 0 00:25:46.451 Power On Hours: 0 hours 00:25:46.451 Unsafe Shutdowns: 0 00:25:46.451 Unrecoverable Media Errors: 0 00:25:46.451 Lifetime Error Log Entries: 0 00:25:46.451 Warning Temperature Time: 0 minutes 00:25:46.451 Critical Temperature Time: 0 minutes 00:25:46.451 00:25:46.451 Number of Queues 00:25:46.451 ================ 00:25:46.451 Number of I/O Submission Queues: 127 00:25:46.451 Number of I/O Completion Queues: 127 00:25:46.451 00:25:46.451 Active Namespaces 00:25:46.451 ================= 00:25:46.451 Namespace ID:1 00:25:46.451 Error Recovery Timeout: Unlimited 00:25:46.451 Command Set Identifier: NVM (00h) 00:25:46.451 Deallocate: Supported 00:25:46.451 Deallocated/Unwritten Error: Not 
Supported 00:25:46.451 Deallocated Read Value: Unknown 00:25:46.451 Deallocate in Write Zeroes: Not Supported 00:25:46.451 Deallocated Guard Field: 0xFFFF 00:25:46.451 Flush: Supported 00:25:46.451 Reservation: Supported 00:25:46.451 Namespace Sharing Capabilities: Multiple Controllers 00:25:46.451 Size (in LBAs): 131072 (0GiB) 00:25:46.451 Capacity (in LBAs): 131072 (0GiB) 00:25:46.451 Utilization (in LBAs): 131072 (0GiB) 00:25:46.451 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:46.451 EUI64: ABCDEF0123456789 00:25:46.451 UUID: e73b1d68-255a-472e-bec7-c17d89330ee1 00:25:46.451 Thin Provisioning: Not Supported 00:25:46.451 Per-NS Atomic Units: Yes 00:25:46.451 Atomic Boundary Size (Normal): 0 00:25:46.451 Atomic Boundary Size (PFail): 0 00:25:46.451 Atomic Boundary Offset: 0 00:25:46.451 Maximum Single Source Range Length: 65535 00:25:46.451 Maximum Copy Length: 65535 00:25:46.451 Maximum Source Range Count: 1 00:25:46.451 NGUID/EUI64 Never Reused: No 00:25:46.451 Namespace Write Protected: No 00:25:46.451 Number of LBA Formats: 1 00:25:46.451 Current LBA Format: LBA Format #00 00:25:46.451 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:46.451 00:25:46.451 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:25:46.451 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:46.451 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.451 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:46.451 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.451 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:46.451 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:25:46.451 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:46.451 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:25:46.451 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:46.451 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:25:46.451 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:46.451 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:46.451 rmmod nvme_tcp 00:25:46.451 rmmod nvme_fabrics 00:25:46.451 rmmod nvme_keyring 00:25:46.451 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:46.451 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:25:46.451 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:25:46.451 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3976566 ']' 00:25:46.451 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3976566 00:25:46.451 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3976566 ']' 00:25:46.451 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 3976566 00:25:46.451 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:25:46.451 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:46.451 09:58:01 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3976566 00:25:46.451 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:46.451 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:46.451 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3976566' 00:25:46.451 killing process with pid 3976566 00:25:46.451 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3976566 00:25:46.451 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3976566 00:25:46.713 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:46.713 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:46.713 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:46.713 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:25:46.713 09:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:25:46.713 09:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:46.713 09:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:25:46.713 09:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:46.713 09:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:46.713 09:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.713 09:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:46.713 09:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.630 09:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:48.630 00:25:48.630 real 0m11.664s 00:25:48.630 user 0m8.599s 00:25:48.630 sys 0m6.209s 00:25:48.630 09:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:48.630 09:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:48.630 ************************************ 00:25:48.630 END TEST nvmf_identify 00:25:48.630 ************************************ 00:25:48.891 09:58:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:48.891 09:58:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:48.891 09:58:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:48.891 09:58:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.891 ************************************ 00:25:48.891 START TEST nvmf_perf 00:25:48.891 ************************************ 00:25:48.891 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:48.891 * Looking for test storage... 
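The teardown traced above (nvmftestfini) reduces to four steps: delete the subsystem over JSON-RPC, unload the kernel initiator modules, kill the target process, and undo the network plumbing. A condensed sketch of the equivalent manual cleanup, run from the SPDK checkout; the ip netns delete line is an assumption about what _remove_spdk_ns does, and $nvmfpid stands in for the recorded target pid (3976566 in this run):

  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -r nvme-tcp nvme-fabrics     # nvme_keyring drops out with them, per the rmmod lines
  kill "$nvmfpid"                       # target process recorded by nvmfappstart
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk       # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1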
00:25:48.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:48.891 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:48.891 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:25:48.891 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:49.153 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:49.153 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:49.153 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:49.153 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:49.153 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:25:49.153 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:25:49.153 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:25:49.153 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:25:49.153 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:25:49.153 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:25:49.153 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:25:49.153 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:49.153 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:25:49.153 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:25:49.153 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:49.153 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:49.153 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:25:49.153 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:25:49.153 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:49.153 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:25:49.153 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:49.153 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:25:49.153 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:25:49.153 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:49.153 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:25:49.153 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:49.153 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:49.153 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:49.153 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:25:49.153 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:49.153 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:49.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.153 --rc genhtml_branch_coverage=1 00:25:49.153 --rc genhtml_function_coverage=1 00:25:49.153 --rc genhtml_legend=1 00:25:49.153 --rc geninfo_all_blocks=1 00:25:49.153 --rc geninfo_unexecuted_blocks=1 00:25:49.153 00:25:49.153 ' 00:25:49.153 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:49.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.153 --rc genhtml_branch_coverage=1 00:25:49.153 --rc genhtml_function_coverage=1 00:25:49.153 --rc genhtml_legend=1 00:25:49.153 --rc geninfo_all_blocks=1 00:25:49.153 --rc geninfo_unexecuted_blocks=1 00:25:49.153 00:25:49.153 ' 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:49.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.154 --rc genhtml_branch_coverage=1 00:25:49.154 --rc genhtml_function_coverage=1 00:25:49.154 --rc genhtml_legend=1 00:25:49.154 --rc geninfo_all_blocks=1 00:25:49.154 --rc geninfo_unexecuted_blocks=1 00:25:49.154 00:25:49.154 ' 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:49.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.154 --rc genhtml_branch_coverage=1 00:25:49.154 --rc genhtml_function_coverage=1 00:25:49.154 --rc genhtml_legend=1 00:25:49.154 --rc geninfo_all_blocks=1 00:25:49.154 --rc geninfo_unexecuted_blocks=1 00:25:49.154 00:25:49.154 ' 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:49.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:49.154 09:58:04 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:49.154 09:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:57.301 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:57.301 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:57.301 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:57.301 09:58:11 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:57.301 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:57.301 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:57.302 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:57.302 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:57.302 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:57.302 09:58:11 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:57.302 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:57.302 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:25:57.302 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:25:57.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:57.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.592 ms
00:25:57.302
00:25:57.302 --- 10.0.0.2 ping statistics ---
00:25:57.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:57.302 rtt min/avg/max/mdev = 0.592/0.592/0.592/0.000 ms
00:25:57.302 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:57.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:57.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms
00:25:57.302
00:25:57.302 --- 10.0.0.1 ping statistics ---
00:25:57.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:57.302 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms
00:25:57.302 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:57.302 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0
00:25:57.302 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:25:57.302 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:57.302 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:25:57.302 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:25:57.302 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:57.302 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:25:57.302 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:25:57.302 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:25:57.302 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:25:57.302 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:57.302 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:25:57.302 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3980923
00:25:57.302 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3980923
00:25:57.302 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:25:57.302 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3980923 ']'
00:25:57.302 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:57.302 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:57.302 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket
/var/tmp/spdk.sock...' 00:25:57.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:57.302 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:57.302 09:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:57.302 [2024-11-27 09:58:12.006089] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:25:57.302 [2024-11-27 09:58:12.006171] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:57.302 [2024-11-27 09:58:12.105398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:57.302 [2024-11-27 09:58:12.159103] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:57.302 [2024-11-27 09:58:12.159154] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:57.302 [2024-11-27 09:58:12.159175] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:57.302 [2024-11-27 09:58:12.159183] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:57.302 [2024-11-27 09:58:12.159189] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:57.302 [2024-11-27 09:58:12.161202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:57.302 [2024-11-27 09:58:12.161364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:57.302 [2024-11-27 09:58:12.161525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:57.302 [2024-11-27 09:58:12.161525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:57.564 09:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:57.564 09:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:25:57.564 09:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:57.564 09:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:57.564 09:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:57.564 09:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:57.564 09:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:57.564 09:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:58.136 09:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:58.136 09:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:58.136 09:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:25:58.397 09:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:58.397 09:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
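The target that is now answering RPCs was started inside a dedicated network namespace: one E810 port (cvl_0_0) is moved into the namespace as the target side, while its peer port (cvl_0_1) stays in the root namespace as the initiator side, so NVMe/TCP traffic crosses the NIC rather than the loopback device. A minimal sketch of that bring-up, using only commands that appear in the trace above (root required; the trace invokes nvmf_tgt by its full build path):

  ip netns add cvl_0_0_ns_spdk                                        # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root ns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (inside ns)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # reachability check first
  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF        # target app in the namespace

The two pings above (0.592 ms and 0.320 ms round trips) confirm the ports can reach each other in both directions before any NVMe traffic is attempted.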
00:25:58.397 09:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:25:58.397 09:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:58.397 09:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:58.397 09:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:58.658 [2024-11-27 09:58:13.994070] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:58.658 09:58:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:58.919 09:58:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:58.919 09:58:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:59.182 09:58:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:59.182 09:58:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:59.182 09:58:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:59.443 [2024-11-27 09:58:14.773086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:59.443 09:58:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:59.703 09:58:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:25:59.703 09:58:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:59.703 09:58:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:59.703 09:58:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:26:01.087 Initializing NVMe Controllers 00:26:01.087 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:26:01.087 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:26:01.087 Initialization complete. Launching workers. 
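With the transport, subsystem, namespaces, and listeners in place, the first spdk_nvme_perf invocation above hits the local PCIe controller directly (-q 32 queue depth, -o 4096-byte IOs, -w randrw with -M 50 for a 50/50 read/write mix, -t 1 second), establishing a direct-attached baseline before the fabric runs whose tables follow. The RPC plumbing condenses to roughly this (a sketch assembled from the calls in the trace; the `rpc` shorthand is introduced here for readability, and the jq filter is the one used above):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Recover the PCI address of the auto-attached NVMe controller (Nvme0).
  local_nvme_trid=$($rpc framework_get_config bdev \
      | jq -r '.[].params | select(.name=="Nvme0").traddr')    # -> 0000:65:00.0 here
  bdevs=$($rpc bdev_malloc_create 64 512)                      # -> Malloc0: 64 MiB, 512 B blocks
  [ -n "$local_nvme_trid" ] && bdevs="$bdevs Nvme0n1"
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # -a: allow any host
  for bdev in $bdevs; do                                       # Malloc0 -> NSID 1, Nvme0n1 -> NSID 2
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $bdev
  done
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420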
00:26:01.087 ========================================================
00:26:01.087 Latency(us)
00:26:01.087 Device Information : IOPS MiB/s Average min max
00:26:01.087 PCIE (0000:65:00.0) NSID 1 from core 0: 77859.13 304.14 410.25 13.32 5062.42
00:26:01.087 ========================================================
00:26:01.087 Total : 77859.13 304.14 410.25 13.32 5062.42
00:26:01.087
00:26:01.087 09:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:02.472 Initializing NVMe Controllers
00:26:02.472 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:02.472 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:02.472 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:26:02.472 Initialization complete. Launching workers.
00:26:02.472 ========================================================
00:26:02.472 Latency(us)
00:26:02.472 Device Information : IOPS MiB/s Average min max
00:26:02.472 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 102.00 0.40 9849.07 215.16 46278.70
00:26:02.472 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 17122.89 7233.14 47889.22
00:26:02.472 ========================================================
00:26:02.472 Total : 163.00 0.64 12571.18 215.16 47889.22
00:26:02.472
00:26:02.472 09:58:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:03.854 Initializing NVMe Controllers
00:26:03.854 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:03.854 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:03.854 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:26:03.854 Initialization complete. Launching workers.
00:26:03.854 ========================================================
00:26:03.854 Latency(us)
00:26:03.854 Device Information : IOPS MiB/s Average min max
00:26:03.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11827.00 46.20 2707.33 450.28 6761.77
00:26:03.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3830.00 14.96 8400.57 6494.18 15934.31
00:26:03.854 ========================================================
00:26:03.854 Total : 15657.00 61.16 4100.00 450.28 15934.31
00:26:03.854
00:26:03.854 09:58:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:26:03.854 09:58:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:26:03.854 09:58:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:06.414 Initializing NVMe Controllers
00:26:06.414 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:06.414 Controller IO queue size 128, less than required.
00:26:06.414 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:06.414 Controller IO queue size 128, less than required.
00:26:06.414 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:06.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:06.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:26:06.414 Initialization complete. Launching workers.
00:26:06.414 ========================================================
00:26:06.415 Latency(us)
00:26:06.415 Device Information : IOPS MiB/s Average min max
00:26:06.415 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1780.89 445.22 73247.48 39917.05 125124.09
00:26:06.415 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 616.75 154.19 221698.36 57268.12 314234.56
00:26:06.415 ========================================================
00:26:06.415 Total : 2397.64 599.41 111433.80 39917.05 314234.56
00:26:06.415
00:26:06.415 09:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:26:06.676 No valid NVMe controllers or AIO or URING devices found
00:26:06.676 Initializing NVMe Controllers
00:26:06.676 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:06.676 Controller IO queue size 128, less than required.
00:26:06.676 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:06.676 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:26:06.676 Controller IO queue size 128, less than required.
00:26:06.676 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:06.676 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:26:06.676 WARNING: Some requested NVMe devices were skipped
00:26:06.676 09:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:26:09.264 Initializing NVMe Controllers
00:26:09.264 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:09.264 Controller IO queue size 128, less than required.
00:26:09.264 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:09.264 Controller IO queue size 128, less than required.
00:26:09.264 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:09.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:09.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:26:09.264 Initialization complete. Launching workers.
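The final fabric run above adds --transport-stat, so before its latency table spdk_nvme_perf dumps per-namespace TCP transport counters (shown next): polls is how often the transport poller ran, idle_polls how often it found nothing to do, sock_completions and nvme_completions count socket-level and NVMe-level completions, and submitted_requests versus queued_requests distinguishes commands sent immediately from ones held back waiting for resources. A quick way to condense the counters below into a busy-poll ratio (a hypothetical helper, not part of the harness):

  busy_ratio() { echo "scale=3; ($1 - $2) / $1" | bc; }   # (polls - idle_polls) / polls
  busy_ratio 38428 23868   # NSID 1 poll counters below -> .378
  busy_ratio 42073 27748   # NSID 2 poll counters below -> .340

With roughly a third of polls doing useful work at 256 KiB IOs, the numbers suggest the poller is nowhere near saturated, consistent with the modest IOPS in the closing table.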
00:26:09.264 00:26:09.264 ==================== 00:26:09.264 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:26:09.264 TCP transport: 00:26:09.264 polls: 38428 00:26:09.264 idle_polls: 23868 00:26:09.264 sock_completions: 14560 00:26:09.264 nvme_completions: 7501 00:26:09.264 submitted_requests: 11228 00:26:09.264 queued_requests: 1 00:26:09.264 00:26:09.264 ==================== 00:26:09.264 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:26:09.264 TCP transport: 00:26:09.264 polls: 42073 00:26:09.264 idle_polls: 27748 00:26:09.264 sock_completions: 14325 00:26:09.264 nvme_completions: 7579 00:26:09.264 submitted_requests: 11358 00:26:09.264 queued_requests: 1 00:26:09.264 ======================================================== 00:26:09.264 Latency(us) 00:26:09.264 Device Information : IOPS MiB/s Average min max 00:26:09.264 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1875.00 468.75 70194.39 42499.60 123629.34 00:26:09.264 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1894.50 473.62 69075.76 29042.08 136646.52 00:26:09.264 ======================================================== 00:26:09.264 Total : 3769.50 942.37 69632.19 29042.08 136646.52 00:26:09.264 00:26:09.264 09:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:26:09.264 09:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:09.264 09:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:26:09.264 09:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:26:09.264 09:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:26:09.264 09:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:09.264 09:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:26:09.264 09:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:09.264 09:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:26:09.264 09:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:09.264 09:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:09.264 rmmod nvme_tcp 00:26:09.264 rmmod nvme_fabrics 00:26:09.526 rmmod nvme_keyring 00:26:09.526 09:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:09.526 09:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:26:09.526 09:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:26:09.526 09:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3980923 ']' 00:26:09.526 09:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3980923 00:26:09.526 09:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3980923 ']' 00:26:09.526 09:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 3980923 00:26:09.526 09:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:26:09.526 09:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:09.526 09:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3980923 00:26:09.526 09:58:24 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:09.526 09:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:09.526 09:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3980923' 00:26:09.526 killing process with pid 3980923 00:26:09.526 09:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 3980923 00:26:09.526 09:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3980923 00:26:11.438 09:58:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:11.438 09:58:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:11.438 09:58:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:11.438 09:58:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:26:11.438 09:58:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:26:11.438 09:58:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:11.438 09:58:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:26:11.438 09:58:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:11.438 09:58:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:11.438 09:58:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.438 09:58:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:11.438 09:58:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.982 09:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:13.982 00:26:13.982 real 0m24.681s 00:26:13.982 user 0m59.910s 00:26:13.982 sys 0m8.713s 00:26:13.982 09:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:13.982 09:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:13.982 ************************************ 00:26:13.982 END TEST nvmf_perf 00:26:13.982 ************************************ 00:26:13.982 09:58:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:13.982 09:58:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:13.982 09:58:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:13.982 09:58:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.982 ************************************ 00:26:13.982 START TEST nvmf_fio_host 00:26:13.982 ************************************ 00:26:13.982 09:58:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:13.982 * Looking for test storage... 
00:26:13.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:13.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.982 --rc genhtml_branch_coverage=1 00:26:13.982 --rc genhtml_function_coverage=1 00:26:13.982 --rc genhtml_legend=1 00:26:13.982 --rc geninfo_all_blocks=1 00:26:13.982 --rc geninfo_unexecuted_blocks=1 00:26:13.982 00:26:13.982 ' 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:13.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.982 --rc genhtml_branch_coverage=1 00:26:13.982 --rc genhtml_function_coverage=1 00:26:13.982 --rc genhtml_legend=1 00:26:13.982 --rc geninfo_all_blocks=1 00:26:13.982 --rc geninfo_unexecuted_blocks=1 00:26:13.982 00:26:13.982 ' 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:13.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.982 --rc genhtml_branch_coverage=1 00:26:13.982 --rc genhtml_function_coverage=1 00:26:13.982 --rc genhtml_legend=1 00:26:13.982 --rc geninfo_all_blocks=1 00:26:13.982 --rc geninfo_unexecuted_blocks=1 00:26:13.982 00:26:13.982 ' 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:13.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.982 --rc genhtml_branch_coverage=1 00:26:13.982 --rc genhtml_function_coverage=1 00:26:13.982 --rc genhtml_legend=1 00:26:13.982 --rc geninfo_all_blocks=1 00:26:13.982 --rc geninfo_unexecuted_blocks=1 00:26:13.982 00:26:13.982 ' 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:13.982 09:58:29 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.982 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:13.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:13.983 
09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:13.983 09:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:22.130 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:22.130 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:22.130 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:22.131 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:22.131 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:22.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:22.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.702 ms 00:26:22.131 00:26:22.131 --- 10.0.0.2 ping statistics --- 00:26:22.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:22.131 rtt min/avg/max/mdev = 0.702/0.702/0.702/0.000 ms 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:22.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:22.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:26:22.131 00:26:22.131 --- 10.0.0.1 ping statistics --- 00:26:22.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:22.131 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3987988 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3987988 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3987988 ']' 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:22.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:22.131 09:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.131 [2024-11-27 09:58:36.726972] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
00:26:22.131 [2024-11-27 09:58:36.727035] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:22.131 [2024-11-27 09:58:36.827484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:22.131 [2024-11-27 09:58:36.880678] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:22.131 [2024-11-27 09:58:36.880731] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:22.131 [2024-11-27 09:58:36.880739] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:22.131 [2024-11-27 09:58:36.880746] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:22.131 [2024-11-27 09:58:36.880753] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:22.131 [2024-11-27 09:58:36.883054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:22.131 [2024-11-27 09:58:36.883228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:22.131 [2024-11-27 09:58:36.883329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:22.131 [2024-11-27 09:58:36.883331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.131 09:58:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:22.131 09:58:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:26:22.131 09:58:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:22.392 [2024-11-27 09:58:37.723983] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:22.392 09:58:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:26:22.392 09:58:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:22.392 09:58:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.392 09:58:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:26:22.653 Malloc1 00:26:22.653 09:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:22.915 09:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:23.177 09:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:23.177 [2024-11-27 09:58:38.585045] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:23.177 09:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:23.438 09:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:23.438 09:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:23.438 09:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:23.438 09:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:23.438 09:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:23.438 09:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:23.438 09:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:23.438 09:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:26:23.438 09:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:23.438 09:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:23.438 09:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:23.438 09:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:26:23.438 09:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:23.438 09:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:23.438 09:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:23.438 09:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:23.438 09:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:23.438 09:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:26:23.438 09:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:23.438 09:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:23.438 09:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:23.438 09:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:23.438 09:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:24.011 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:24.011 fio-3.35 00:26:24.011 Starting 1 thread 00:26:26.548 00:26:26.548 test: (groupid=0, jobs=1): 
err= 0: pid=3988842: Wed Nov 27 09:58:41 2024 00:26:26.548 read: IOPS=13.8k, BW=53.9MiB/s (56.6MB/s)(108MiB/2005msec) 00:26:26.548 slat (usec): min=2, max=310, avg= 2.14, stdev= 2.59 00:26:26.548 clat (usec): min=3332, max=8503, avg=5090.44, stdev=356.55 00:26:26.548 lat (usec): min=3334, max=8505, avg=5092.58, stdev=356.58 00:26:26.548 clat percentiles (usec): 00:26:26.548 | 1.00th=[ 4293], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817], 00:26:26.548 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5145], 00:26:26.548 | 70.00th=[ 5276], 80.00th=[ 5342], 90.00th=[ 5538], 95.00th=[ 5669], 00:26:26.548 | 99.00th=[ 5932], 99.50th=[ 6128], 99.90th=[ 7111], 99.95th=[ 7832], 00:26:26.548 | 99.99th=[ 8455] 00:26:26.548 bw ( KiB/s): min=53880, max=55928, per=100.00%, avg=55258.00, stdev=932.73, samples=4 00:26:26.548 iops : min=13470, max=13982, avg=13814.50, stdev=233.18, samples=4 00:26:26.548 write: IOPS=13.8k, BW=53.9MiB/s (56.5MB/s)(108MiB/2005msec); 0 zone resets 00:26:26.548 slat (usec): min=2, max=194, avg= 2.21, stdev= 1.36 00:26:26.548 clat (usec): min=2666, max=8432, avg=4116.52, stdev=305.74 00:26:26.548 lat (usec): min=2684, max=8434, avg=4118.73, stdev=305.77 00:26:26.548 clat percentiles (usec): 00:26:26.548 | 1.00th=[ 3425], 5.00th=[ 3687], 10.00th=[ 3785], 20.00th=[ 3884], 00:26:26.548 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4178], 00:26:26.548 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4424], 95.00th=[ 4555], 00:26:26.548 | 99.00th=[ 4817], 99.50th=[ 4948], 99.90th=[ 6390], 99.95th=[ 7373], 00:26:26.548 | 99.99th=[ 8356] 00:26:26.548 bw ( KiB/s): min=54160, max=55680, per=99.99%, avg=55196.00, stdev=708.55, samples=4 00:26:26.548 iops : min=13540, max=13920, avg=13799.00, stdev=177.14, samples=4 00:26:26.548 lat (msec) : 4=16.60%, 10=83.40% 00:26:26.548 cpu : usr=73.70%, sys=25.10%, ctx=23, majf=0, minf=17 00:26:26.548 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:26:26.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:26.548 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:26.548 issued rwts: total=27684,27670,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:26.548 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:26.548 00:26:26.548 Run status group 0 (all jobs): 00:26:26.548 READ: bw=53.9MiB/s (56.6MB/s), 53.9MiB/s-53.9MiB/s (56.6MB/s-56.6MB/s), io=108MiB (113MB), run=2005-2005msec 00:26:26.548 WRITE: bw=53.9MiB/s (56.5MB/s), 53.9MiB/s-53.9MiB/s (56.5MB/s-56.5MB/s), io=108MiB (113MB), run=2005-2005msec 00:26:26.548 09:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:26.548 09:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:26.548 09:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:26.548 09:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:26.549 09:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:26.549 
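The fio job above is driven through SPDK's fio plugin: the fio_nvme wrapper preloads the spdk_nvme ioengine into stock fio and encodes the transport address in --filename instead of a device path (the ldd/grep libasan steps only check whether a sanitizer runtime must be preloaded ahead of the plugin; none is linked in this build). A minimal equivalent invocation, assuming fio is installed at /usr/src/fio as on this runner and running from the SPDK repo root:

  LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio \
      ./app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
      --bs=4096

The reported numbers are self-consistent: at bs=4096, the average 13814.5 read IOPS work out to 13814.5 x 4096 B = 56.6 MB/s, which fio prints as 53.9 MiB/s (56584192 / 2^20 = 53.96).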
09:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:26.549 09:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:26:26.549 09:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:26.549 09:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:26.549 09:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:26.549 09:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:26:26.549 09:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:26.549 09:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:26.549 09:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:26.549 09:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:26.549 09:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:26.549 09:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:26:26.549 09:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:26.549 09:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:26.549 09:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:26.549 09:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:26.549 09:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:26.549 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:26:26.549 fio-3.35 00:26:26.549 Starting 1 thread 00:26:29.090 00:26:29.090 test: (groupid=0, jobs=1): err= 0: pid=3989371: Wed Nov 27 09:58:44 2024 00:26:29.090 read: IOPS=9396, BW=147MiB/s (154MB/s)(300MiB/2045msec) 00:26:29.090 slat (usec): min=3, max=114, avg= 3.61, stdev= 1.62 00:26:29.090 clat (usec): min=1484, max=50509, avg=8184.79, stdev=3085.63 00:26:29.090 lat (usec): min=1488, max=50513, avg=8188.40, stdev=3085.72 00:26:29.090 clat percentiles (usec): 00:26:29.090 | 1.00th=[ 4080], 5.00th=[ 5080], 10.00th=[ 5604], 20.00th=[ 6325], 00:26:29.090 | 30.00th=[ 6915], 40.00th=[ 7504], 50.00th=[ 8029], 60.00th=[ 8586], 00:26:29.090 | 70.00th=[ 9110], 80.00th=[ 9765], 90.00th=[10552], 95.00th=[10945], 00:26:29.090 | 99.00th=[12911], 99.50th=[14091], 99.90th=[49021], 99.95th=[49546], 00:26:29.090 | 99.99th=[50594] 00:26:29.090 bw ( KiB/s): min=68416, max=85536, per=51.11%, avg=76832.00, stdev=7070.82, samples=4 00:26:29.090 iops : min= 4276, max= 5346, avg=4802.00, stdev=441.93, samples=4 00:26:29.090 write: IOPS=5480, BW=85.6MiB/s (89.8MB/s)(157MiB/1833msec); 0 zone resets 00:26:29.090 slat (usec): min=39, max=448, 
avg=40.94, stdev= 8.33 00:26:29.090 clat (usec): min=3298, max=50748, avg=9310.77, stdev=3105.17 00:26:29.090 lat (usec): min=3338, max=50788, avg=9351.71, stdev=3106.09 00:26:29.090 clat percentiles (usec): 00:26:29.090 | 1.00th=[ 6390], 5.00th=[ 7177], 10.00th=[ 7504], 20.00th=[ 7898], 00:26:29.090 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9503], 00:26:29.090 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10814], 95.00th=[11338], 00:26:29.091 | 99.00th=[13698], 99.50th=[16319], 99.90th=[50070], 99.95th=[50070], 00:26:29.091 | 99.99th=[50594] 00:26:29.091 bw ( KiB/s): min=71648, max=89088, per=91.19%, avg=79968.00, stdev=7194.85, samples=4 00:26:29.091 iops : min= 4478, max= 5568, avg=4998.00, stdev=449.68, samples=4 00:26:29.091 lat (msec) : 2=0.03%, 4=0.54%, 10=79.58%, 20=19.41%, 50=0.38% 00:26:29.091 lat (msec) : 100=0.05% 00:26:29.091 cpu : usr=86.55%, sys=12.37%, ctx=16, majf=0, minf=27 00:26:29.091 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:29.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.091 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:29.091 issued rwts: total=19215,10046,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.091 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:29.091 00:26:29.091 Run status group 0 (all jobs): 00:26:29.091 READ: bw=147MiB/s (154MB/s), 147MiB/s-147MiB/s (154MB/s-154MB/s), io=300MiB (315MB), run=2045-2045msec 00:26:29.091 WRITE: bw=85.6MiB/s (89.8MB/s), 85.6MiB/s-85.6MiB/s (89.8MB/s-89.8MB/s), io=157MiB (165MB), run=1833-1833msec 00:26:29.091 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:29.351 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:26:29.351 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:26:29.351 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:26:29.351 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:26:29.351 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:29.351 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:26:29.351 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:29.351 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:26:29.351 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:29.351 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:29.351 rmmod nvme_tcp 00:26:29.351 rmmod nvme_fabrics 00:26:29.351 rmmod nvme_keyring 00:26:29.351 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:29.351 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:26:29.351 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:26:29.351 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3987988 ']' 00:26:29.351 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3987988 00:26:29.351 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3987988 ']' 00:26:29.351 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@958 -- # kill -0 3987988 00:26:29.351 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:26:29.351 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:29.351 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3987988 00:26:29.351 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:29.351 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:29.351 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3987988' 00:26:29.351 killing process with pid 3987988 00:26:29.351 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3987988 00:26:29.351 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3987988 00:26:29.612 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:29.612 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:29.612 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:29.612 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:26:29.612 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:26:29.612 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:29.612 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:26:29.612 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:29.612 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:29.612 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.612 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:29.612 09:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.155 09:58:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:32.155 00:26:32.155 real 0m18.070s 00:26:32.155 user 1m12.013s 00:26:32.155 sys 0m7.570s 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.155 ************************************ 00:26:32.155 END TEST nvmf_fio_host 00:26:32.155 ************************************ 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.155 ************************************ 00:26:32.155 START TEST nvmf_failover 00:26:32.155 ************************************ 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:32.155 * Looking for test storage... 00:26:32.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:32.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.155 --rc genhtml_branch_coverage=1 00:26:32.155 --rc genhtml_function_coverage=1 00:26:32.155 --rc genhtml_legend=1 00:26:32.155 --rc geninfo_all_blocks=1 00:26:32.155 --rc geninfo_unexecuted_blocks=1 00:26:32.155 00:26:32.155 ' 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:32.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.155 --rc genhtml_branch_coverage=1 00:26:32.155 --rc genhtml_function_coverage=1 00:26:32.155 --rc genhtml_legend=1 00:26:32.155 --rc geninfo_all_blocks=1 00:26:32.155 --rc geninfo_unexecuted_blocks=1 00:26:32.155 00:26:32.155 ' 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:32.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.155 --rc genhtml_branch_coverage=1 00:26:32.155 --rc genhtml_function_coverage=1 00:26:32.155 --rc genhtml_legend=1 00:26:32.155 --rc geninfo_all_blocks=1 00:26:32.155 --rc geninfo_unexecuted_blocks=1 00:26:32.155 00:26:32.155 ' 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:32.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.155 --rc genhtml_branch_coverage=1 00:26:32.155 --rc genhtml_function_coverage=1 00:26:32.155 --rc genhtml_legend=1 00:26:32.155 --rc geninfo_all_blocks=1 00:26:32.155 --rc geninfo_unexecuted_blocks=1 00:26:32.155 00:26:32.155 ' 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:32.155 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:32.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
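One detail worth flagging in the trace above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' and emits the non-fatal "integer expression expected" complaint, meaning an unset or empty variable is reaching a numeric test. The log does not show which variable sits at common.sh:33, so FLAG below is a placeholder, but the failure mode and the usual guard look like this:

  [ '' -eq 1 ]            # -> "[: : integer expression expected", exit status 2
  [ "${FLAG:-0}" -eq 1 ]  # defaulting the empty value keeps the test well-formed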
00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:26:32.156 09:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:40.295 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:40.295 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:40.295 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:40.296 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:40.296 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:40.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:40.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.695 ms 00:26:40.296 00:26:40.296 --- 10.0.0.2 ping statistics --- 00:26:40.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.296 rtt min/avg/max/mdev = 0.695/0.695/0.695/0.000 ms 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:40.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:40.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:26:40.296 00:26:40.296 --- 10.0.0.1 ping statistics --- 00:26:40.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.296 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3994037 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3994037 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3994037 ']' 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:40.296 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:40.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:40.297 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:40.297 09:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:40.297 [2024-11-27 09:58:54.669147] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:26:40.297 [2024-11-27 09:58:54.669218] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:40.297 [2024-11-27 09:58:54.766517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:40.297 [2024-11-27 09:58:54.798454] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
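For reference, the core masks used by the two target launches decode as follows: the fio_host run used -m 0xF (0b1111, cores 0-3, hence its four reactor lines), while this failover target uses -m 0xE (0b1110, skipping core 0), which matches the "Total cores available: 3" notice above and the three reactors started just below. A quick check:

  printf '%d\n' 0xE   # 14 = 0b1110 -> cores 1,2,3
  printf '%d\n' 0xF   # 15 = 0b1111 -> cores 0,1,2,3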
00:26:40.297 [2024-11-27 09:58:54.798487] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:40.297 [2024-11-27 09:58:54.798494] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:40.297 [2024-11-27 09:58:54.798499] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:40.297 [2024-11-27 09:58:54.798504] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:40.297 [2024-11-27 09:58:54.799845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:40.297 [2024-11-27 09:58:54.799995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:40.297 [2024-11-27 09:58:54.799997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:40.297 09:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:40.297 09:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:40.297 09:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:40.297 09:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:40.297 09:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:40.297 09:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:40.297 09:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:40.297 [2024-11-27 09:58:55.673652] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:40.297 09:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:40.557 Malloc0 00:26:40.557 09:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:40.817 09:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:40.817 09:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:41.078 [2024-11-27 09:58:56.398229] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:41.078 09:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:41.339 [2024-11-27 09:58:56.582675] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:41.339 09:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:41.339 [2024-11-27 09:58:56.767214] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:26:41.339 09:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3994603 00:26:41.339 09:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:26:41.339 09:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:41.339 09:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3994603 /var/tmp/bdevperf.sock 00:26:41.339 09:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3994603 ']' 00:26:41.339 09:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:41.339 09:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:41.339 09:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:41.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:41.339 09:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:41.339 09:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:42.282 09:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:42.282 09:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:42.282 09:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:42.542 NVMe0n1 00:26:42.542 09:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:42.803 00:26:42.803 09:58:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3994790 00:26:42.803 09:58:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:42.803 09:58:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:26:44.189 09:58:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:44.189 09:58:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:26:47.489 09:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:47.489 00:26:47.489 09:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # 
00:26:47.489 09:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:26:47.489 [2024-11-27 09:59:02.839941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f43040 is same with the state(6) to be set
[... the same tcp.c:1773 message for tqpair=0x1f43040 repeated dozens of times through 09:59:02.840192 while the 4421 listener's connection is torn down ...]
00:26:47.490 09:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:26:50.790 09:59:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:50.790 [2024-11-27 09:59:06.025656] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:50.790 09:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:26:51.732 09:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:26:51.992 [2024-11-27 09:59:07.215220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e084c0 is same with the state(6) to be set
[... the same tcp.c:1773 message for tqpair=0x1e084c0 repeated through 09:59:07.215357 while the 4422 listener's connection is torn down ...]
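Entries @43 through @57 are the fault-injection choreography itself: drop the live listener, give the host time to fail over, rotate, and repeat. Reconstructed from the trace as a sketch (the rpc helper and $SPDK_DIR are shorthand assumptions, not taken from the script):

    NQN=nqn.2016-06.io.spdk:cnode1
    rpc() { "$SPDK_DIR"/scripts/rpc.py "$@"; }                           # $SPDK_DIR is assumed

    rpc nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420   # @43: kill the active path
    sleep 3                                                              # @45: host fails over to 4421
    rpc nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421   # @48: kill that one too
    sleep 3                                                              # @50: host fails over again
    rpc nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420      # @53: restore the first path
    sleep 1                                                              # @55
    rpc nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422   # @57: force a failover home

Between @45 and @48 the test also attaches 10.0.0.2:4422 as a third path on the bdevperf side (@47), so the host always has somewhere left to go.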
00:26:51.992 09:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3994790
00:26:58.708 {
00:26:58.708   "results": [
00:26:58.708     {
00:26:58.708       "job": "NVMe0n1",
00:26:58.708       "core_mask": "0x1",
00:26:58.708       "workload": "verify",
00:26:58.708       "status": "finished",
00:26:58.708       "verify_range": {
00:26:58.708         "start": 0,
00:26:58.708         "length": 16384
00:26:58.708       },
00:26:58.708       "queue_depth": 128,
00:26:58.708       "io_size": 4096,
00:26:58.708       "runtime": 15.005744,
00:26:58.708       "iops": 12599.441920373958,
00:26:58.708       "mibps": 49.216570001460774,
00:26:58.708       "io_failed": 9717,
00:26:58.708       "io_timeout": 0,
00:26:58.708       "avg_latency_us": 9642.292563574989,
00:26:58.708       "min_latency_us": 535.8933333333333,
00:26:58.708       "max_latency_us": 18896.213333333333
00:26:58.708     }
00:26:58.708   ],
00:26:58.708   "core_count": 1
00:26:58.708 }
00:26:58.708 09:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3994603
00:26:58.708 09:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3994603 ']'
00:26:58.708 09:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3994603
00:26:58.708 09:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:26:58.708 09:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:58.708 09:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3994603
00:26:58.708 09:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:26:58.708 09:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:26:58.708 09:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3994603'
00:26:58.708 killing process with pid 3994603
00:26:58.708 09:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3994603
00:26:58.708 09:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3994603
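The summary numbers hang together: 12599.44 IOPS of 4 KiB I/O is exactly the reported 49.22 MiB/s, and io_failed=9717 counts the I/Os that came back aborted across the listener removals; since bdevperf was started with -f, the run keeps going and still reports status "finished". A quick arithmetic check with awk:

    awk 'BEGIN { printf "%.6f MiB/s\n", 12599.441920373958 * 4096 / (1024 * 1024) }'
    # -> 49.216570 MiB/s, matching the "mibps" field above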
00:26:58.708 [2024-11-27 09:58:56.848529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3994603 ] 00:26:58.708 [2024-11-27 09:58:56.939580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.708 [2024-11-27 09:58:56.975985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.708 Running I/O for 15 seconds... 00:26:58.708 11704.00 IOPS, 45.72 MiB/s [2024-11-27T08:59:14.174Z] [2024-11-27 09:58:59.380355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.708 [2024-11-27 09:58:59.380398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.708 [2024-11-27 09:58:59.380416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:100512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.708 [2024-11-27 09:58:59.380424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.708 [2024-11-27 09:58:59.380435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:100520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.708 [2024-11-27 09:58:59.380442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.708 [2024-11-27 09:58:59.380451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.708 [2024-11-27 09:58:59.380459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.708 [2024-11-27 09:58:59.380469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.708 [2024-11-27 09:58:59.380476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.708 [2024-11-27 09:58:59.380485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.708 [2024-11-27 09:58:59.380492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.708 [2024-11-27 09:58:59.380502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:100552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.708 [2024-11-27 09:58:59.380509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.708 [2024-11-27 09:58:59.380518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.708 [2024-11-27 09:58:59.380526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.708 [2024-11-27 09:58:59.380536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:100568 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.708 [2024-11-27 09:58:59.380543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.708 [2024-11-27 09:58:59.380552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.708 [2024-11-27 09:58:59.380560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.708 [2024-11-27 09:58:59.380570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.708 [2024-11-27 09:58:59.380577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.708 [2024-11-27 09:58:59.380598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.708 [2024-11-27 09:58:59.380606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.708 [2024-11-27 09:58:59.380618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.708 [2024-11-27 09:58:59.380625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.708 [2024-11-27 09:58:59.380635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.708 [2024-11-27 09:58:59.380643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.708 [2024-11-27 09:58:59.380653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.708 [2024-11-27 09:58:59.380660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.708 [2024-11-27 09:58:59.380671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:100624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.708 [2024-11-27 09:58:59.380679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.708 [2024-11-27 09:58:59.380690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.708 [2024-11-27 09:58:59.380698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.708 [2024-11-27 09:58:59.380707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.709 [2024-11-27 09:58:59.380714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.380724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:58.709 [2024-11-27 09:58:59.380731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.380741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.709 [2024-11-27 09:58:59.380751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.380760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.709 [2024-11-27 09:58:59.380768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.380777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:100672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.709 [2024-11-27 09:58:59.380785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.380797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.709 [2024-11-27 09:58:59.380806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.380816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.709 [2024-11-27 09:58:59.380827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.380837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:100696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.709 [2024-11-27 09:58:59.380845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.380855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:100704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.709 [2024-11-27 09:58:59.380862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.380872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:100712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.709 [2024-11-27 09:58:59.380879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.380889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.709 [2024-11-27 09:58:59.380896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.380906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.709 [2024-11-27 09:58:59.380914] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.380923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.709 [2024-11-27 09:58:59.380931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.380941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.709 [2024-11-27 09:58:59.380948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.380958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.709 [2024-11-27 09:58:59.380965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.380975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.709 [2024-11-27 09:58:59.380982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.380992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.709 [2024-11-27 09:58:59.380999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.381009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.709 [2024-11-27 09:58:59.381016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.381026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.709 [2024-11-27 09:58:59.381033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.381044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:100736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.709 [2024-11-27 09:58:59.381051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.381061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:100744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.709 [2024-11-27 09:58:59.381068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.381077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.709 [2024-11-27 09:58:59.381085] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.381095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:100760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.709 [2024-11-27 09:58:59.381102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.381112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.709 [2024-11-27 09:58:59.381120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.381129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.709 [2024-11-27 09:58:59.381136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.381146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.709 [2024-11-27 09:58:59.381153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.381167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.709 [2024-11-27 09:58:59.381175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.381184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.709 [2024-11-27 09:58:59.381192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.381202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.709 [2024-11-27 09:58:59.381209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.381219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.709 [2024-11-27 09:58:59.381226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.381235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:100824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.709 [2024-11-27 09:58:59.381242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.381251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.709 [2024-11-27 09:58:59.381260] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.381270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.709 [2024-11-27 09:58:59.381278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.381287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.709 [2024-11-27 09:58:59.381294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.381303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.709 [2024-11-27 09:58:59.381311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.381320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.709 [2024-11-27 09:58:59.381327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.381337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.709 [2024-11-27 09:58:59.381344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.709 [2024-11-27 09:58:59.381353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:100880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.709 [2024-11-27 09:58:59.381360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.710 [2024-11-27 09:58:59.381371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:100888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.710 [2024-11-27 09:58:59.381378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.710 [2024-11-27 09:58:59.381388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:100896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.710 [2024-11-27 09:58:59.381395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.710 [2024-11-27 09:58:59.381404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.710 [2024-11-27 09:58:59.381412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.710 [2024-11-27 09:58:59.381421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:100912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.710 [2024-11-27 09:58:59.381428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.710 [2024-11-27 09:58:59.381438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:100920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.710 [2024-11-27 09:58:59.381445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.710 [2024-11-27 09:58:59.381455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.710 [2024-11-27 09:58:59.381462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.710 [2024-11-27 09:58:59.381472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.710 [2024-11-27 09:58:59.381481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.710 [2024-11-27 09:58:59.381490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.710 [2024-11-27 09:58:59.381498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.710 [2024-11-27 09:58:59.381507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.710 [2024-11-27 09:58:59.381515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.710 [2024-11-27 09:58:59.381524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:100032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.710 [2024-11-27 09:58:59.381531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.710 [2024-11-27 09:58:59.381541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.710 [2024-11-27 09:58:59.381549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.710 [2024-11-27 09:58:59.381558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.710 [2024-11-27 09:58:59.381566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.710 [2024-11-27 09:58:59.381576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.710 [2024-11-27 09:58:59.381583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.710 [2024-11-27 09:58:59.381592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.710 [2024-11-27 09:58:59.381599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:58.710 [2024-11-27 09:58:59.381609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:100944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.710 [2024-11-27 09:58:59.381616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.710 [2024-11-27 09:58:59.381626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:100952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.710 [2024-11-27 09:58:59.381633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.710 [2024-11-27 09:58:59.381642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.710 [2024-11-27 09:58:59.381649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.710 [2024-11-27 09:58:59.381659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.710 [2024-11-27 09:58:59.381667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.710 [2024-11-27 09:58:59.381676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.710 [2024-11-27 09:58:59.381683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.710 [2024-11-27 09:58:59.381694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.710 [2024-11-27 09:58:59.381701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.710 [2024-11-27 09:58:59.381711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:100080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.710 [2024-11-27 09:58:59.381719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.710 [2024-11-27 09:58:59.381729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:100088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.710 [2024-11-27 09:58:59.381736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.710 [2024-11-27 09:58:59.381745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:100096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.710 [2024-11-27 09:58:59.381752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.710 [2024-11-27 09:58:59.381761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:100104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.710 [2024-11-27 09:58:59.381769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.710 [2024-11-27 
09:58:59.381779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:100112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.710 [2024-11-27 09:58:59.381786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.710 [2024-11-27 09:58:59.381795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:100120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.710 [2024-11-27 09:58:59.381803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.710 [2024-11-27 09:58:59.381812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:100128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.710 [2024-11-27 09:58:59.381819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.710 [2024-11-27 09:58:59.381828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:100136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.710 [2024-11-27 09:58:59.381836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.710 [2024-11-27 09:58:59.381845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.710 [2024-11-27 09:58:59.381852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.710 [2024-11-27 09:58:59.381862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:100152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.710 [2024-11-27 09:58:59.381869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.711 [2024-11-27 09:58:59.381879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:100160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.711 [2024-11-27 09:58:59.381887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.711 [2024-11-27 09:58:59.381897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:100168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.711 [2024-11-27 09:58:59.381905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.711 [2024-11-27 09:58:59.381915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:100176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.711 [2024-11-27 09:58:59.381922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.711 [2024-11-27 09:58:59.381931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:100184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.711 [2024-11-27 09:58:59.381939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.711 [2024-11-27 09:58:59.381948] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:100192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.711 [2024-11-27 09:58:59.381956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.711 [2024-11-27 09:58:59.381965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:100200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.711 [2024-11-27 09:58:59.381972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.711 [2024-11-27 09:58:59.381981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:100208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.711 [2024-11-27 09:58:59.381989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.711 [2024-11-27 09:58:59.381999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:100216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.711 [2024-11-27 09:58:59.382006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.711 [2024-11-27 09:58:59.382015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:100224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.711 [2024-11-27 09:58:59.382022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.711 [2024-11-27 09:58:59.382032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:100232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.711 [2024-11-27 09:58:59.382039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.711 [2024-11-27 09:58:59.382049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:100240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.711 [2024-11-27 09:58:59.382057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.711 [2024-11-27 09:58:59.382066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.711 [2024-11-27 09:58:59.382073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.711 [2024-11-27 09:58:59.382083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:100256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.711 [2024-11-27 09:58:59.382091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.711 [2024-11-27 09:58:59.382101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:100264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.711 [2024-11-27 09:58:59.382108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.711 [2024-11-27 09:58:59.382119] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.711 [2024-11-27 09:58:59.382127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.711 [2024-11-27 09:58:59.382137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:100280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.711 [2024-11-27 09:58:59.382144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.711 [2024-11-27 09:58:59.382154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:100288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.711 [2024-11-27 09:58:59.382166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.711 [2024-11-27 09:58:59.382176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:100296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.711 [2024-11-27 09:58:59.382183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.711 [2024-11-27 09:58:59.382193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:100304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.711 [2024-11-27 09:58:59.382200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.711 [2024-11-27 09:58:59.382209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.711 [2024-11-27 09:58:59.382217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.711 [2024-11-27 09:58:59.382227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:100320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.711 [2024-11-27 09:58:59.382234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.711 [2024-11-27 09:58:59.382244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:100328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.711 [2024-11-27 09:58:59.382251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.711 [2024-11-27 09:58:59.382260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:100336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.711 [2024-11-27 09:58:59.382267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.711 [2024-11-27 09:58:59.382277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:100344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.711 [2024-11-27 09:58:59.382285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.711 [2024-11-27 09:58:59.382294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 
nsid:1 lba:100352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.711 [2024-11-27 09:58:59.382301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.711 [2024-11-27 09:58:59.382311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:100360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.711 [2024-11-27 09:58:59.382318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.711 [2024-11-27 09:58:59.382328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:100368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.711 [2024-11-27 09:58:59.382337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.711 [2024-11-27 09:58:59.382347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:100376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.711 [2024-11-27 09:58:59.382354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.711 [2024-11-27 09:58:59.382364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:100384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.711 [2024-11-27 09:58:59.382371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.711 [2024-11-27 09:58:59.382380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:100392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.711 [2024-11-27 09:58:59.382388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.711 [2024-11-27 09:58:59.382398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:100400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.711 [2024-11-27 09:58:59.382405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.711 [2024-11-27 09:58:59.382414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:100408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.711 [2024-11-27 09:58:59.382421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.711 [2024-11-27 09:58:59.382431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:100416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.711 [2024-11-27 09:58:59.382438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.711 [2024-11-27 09:58:59.382447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:100424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.711 [2024-11-27 09:58:59.382455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.711 [2024-11-27 09:58:59.382464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:100432 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.711 [2024-11-27 09:58:59.382471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:58.711 [2024-11-27 09:58:59.382480..382589] nvme_qpair.c: 243/474: *NOTICE*: READ sqid:1 cid:{108,60,46,25,86,50,22} nsid:1 lba:100440..100488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [seven identical command/completion NOTICE pairs condensed]
00:26:58.712 [2024-11-27 09:58:59.382597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce050 is same with the state(6) to be set
00:26:58.712 [2024-11-27 09:58:59.382606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:58.712 [2024-11-27 09:58:59.382612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:58.712 [2024-11-27 09:58:59.382619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100496 len:8 PRP1 0x0 PRP2 0x0
00:26:58.712 [2024-11-27 09:58:59.382626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:58.712 [2024-11-27 09:58:59.382666] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:26:58.712 [2024-11-27 09:58:59.382688..382744] nvme_qpair.c: 223/474: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0..3 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [four command/completion pairs condensed]
00:26:58.712 [2024-11-27 09:58:59.382752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:26:58.712 [2024-11-27 09:58:59.386301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:26:58.712 [2024-11-27 09:58:59.386325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21acd90 (9): Bad file descriptor
00:26:58.712 [2024-11-27 09:58:59.547230] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
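Editor's note: every aborted command above carries the status "(00/08) ... dnr:0", i.e. Status Code Type 0x0 (generic) with Status Code 0x08 (Command Aborted due to SQ Deletion) and the do-not-retry bit clear, which is what allows the bdev layer to resubmit the I/O on the next path after failover. A minimal sketch of that retry decision follows; it is not bdev_nvme's actual code, and the my_io struct and my_io_requeue() are hypothetical glue, not SPDK APIs. Only spdk_nvme_cpl_is_error(), the spdk_nvme_cpl status fields, and the SPDK_NVME_SCT_GENERIC / SPDK_NVME_SC_ABORTED_SQ_DELETION constants are taken from SPDK headers.

```c
/* Sketch: recognize the (00/08) abort status from the log and requeue. */
#include <inttypes.h>
#include <stdio.h>
#include "spdk/nvme.h"

struct my_io {
	uint64_t lba;     /* hypothetical per-I/O bookkeeping */
	int      retries;
};

/* Hypothetical: push the I/O onto a software retry queue for the new path. */
static void
my_io_requeue(struct my_io *io)
{
	io->retries++;
}

static void
io_completion_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	struct my_io *io = ctx;

	if (spdk_nvme_cpl_is_error(cpl) &&
	    cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION &&
	    !cpl->status.dnr) {
		/* The submission queue was torn down under us (path failover);
		 * the command never executed, so retrying elsewhere is safe. */
		my_io_requeue(io);
		return;
	}
	printf("I/O for lba %" PRIu64 " finished, sct=%u sc=%u\n",
	       io->lba, cpl->status.sct, cpl->status.sc);
}
```

The interleaved "10520.00 IOPS, 41.09 MiB/s" lines below are the benchmark's per-second progress markers: each logged command is len:8 (eight 512-byte blocks, i.e. 4 KiB), and 10520 IOPS x 4 KiB ~ 41.09 MiB/s, so throughput keeps climbing across the resets.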
00:26:58.712 10520.00 IOPS, 41.09 MiB/s [2024-11-27T08:59:14.178Z] 11365.67 IOPS, 44.40 MiB/s [2024-11-27T08:59:14.178Z] 11766.25 IOPS, 45.96 MiB/s [2024-11-27T08:59:14.178Z]
00:26:58.712 [2024-11-27 09:59:02.840781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.712 [2024-11-27 09:59:02.840812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:58.712 [2024-11-27 09:59:02.840828..841996] nvme_qpair.c: 243/474: *NOTICE*: WRITE sqid:1 nsid:1 lba:88624..89384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ sqid:1 nsid:1 lba:88472..88496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [~100 identical in-flight command/completion NOTICE pairs condensed]
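Editor's note: the abort entries come in two flavors. The pairs above print SGL descriptors because those commands were already in flight on the TCP connection when the submission queue was deleted. The cluster that follows instead shows "aborting queued i/o" and "Command completed manually" with "PRP1 0x0 PRP2 0x0": those requests were still sitting in the driver's software queue and never reached the controller, so the driver synthesizes the same (00/08) status itself. A sketch of that manual-completion walk follows, under the assumption (suggested by the zeroed PRP prints) that only software-queued requests take this path; the queued_req type and list are illustrative, not SPDK internals.

```c
/* Sketch: drain a software queue on SQ deletion, synthesizing the status. */
#include <string.h>
#include <sys/queue.h>
#include "spdk/nvme.h"

struct queued_req {
	STAILQ_ENTRY(queued_req) link;
	void (*cb)(void *ctx, const struct spdk_nvme_cpl *cpl);
	void *ctx;
};
STAILQ_HEAD(req_q, queued_req);

static void
abort_queued_reqs(struct req_q *q)
{
	struct spdk_nvme_cpl cpl;
	struct queued_req *req;

	/* Synthesize the same completion the controller would have produced. */
	memset(&cpl, 0, sizeof(cpl));
	cpl.status.sct = SPDK_NVME_SCT_GENERIC;
	cpl.status.sc = SPDK_NVME_SC_ABORTED_SQ_DELETION;
	cpl.status.dnr = 0;	/* "dnr:0" in the log: retry remains allowed */

	while ((req = STAILQ_FIRST(q)) != NULL) {
		STAILQ_REMOVE_HEAD(q, link);
		req->cb(req->ctx, &cpl);	/* same callback path as a real completion */
	}
}
```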
00:26:58.715 [2024-11-27 09:59:02.842013..842379] nvme_qpair.c: 579/558/243/474: *ERROR*: aborting queued i/o; queued WRITE sqid:1 cid:0 nsid:1 lba:89392..89480 len:8 PRP1 0x0 PRP2 0x0 and queued READ sqid:1 cid:0 nsid:1 lba:88504..88552 len:8 PRP1 0x0 PRP2 0x0, each Command completed manually: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [repeated manual-completion NOTICE groups condensed]
00:26:58.716 [2024-11-27 09:59:02.854676..854850] nvme_qpair.c: 579/558/243/474: *ERROR*: aborting queued i/o; queued READ sqid:1 cid:0 nsid:1 lba:88560..88616 len:8 PRP1 0x0 PRP2 0x0, each Command completed manually: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [repeated manual-completion NOTICE groups condensed]
00:26:58.716 [2024-11-27 09:59:02.854885] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:26:58.716 [2024-11-27 09:59:02.854907..854949] nvme_qpair.c: 223/474: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3..0 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [four command/completion pairs condensed]
00:26:58.716 [2024-11-27 09:59:02.854954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
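Editor's note: the two failover notices so far (4420 -> 4421, then 4421 -> 4422) show the initiator walking through three TCP listeners that this test exposes for the same subsystem on 10.0.0.2. A sketch of that path rotation follows; the path_set structure and round-robin policy are illustrative assumptions, not bdev_nvme's internal state. The transport-ID strings and spdk_nvme_transport_id_parse() are the public SPDK forms.

```c
/* Sketch: rotate through the three listener trids on controller failure. */
#include <stddef.h>
#include "spdk/nvme.h"
#include "spdk/util.h"

struct path_set {
	struct spdk_nvme_transport_id trids[3];
	size_t current;
};

static int
path_set_init(struct path_set *ps)
{
	/* Listener set matching this test: one subsystem, three service IDs. */
	static const char *trid_strs[3] = {
		"trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1",
		"trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4421 subnqn:nqn.2016-06.io.spdk:cnode1",
		"trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4422 subnqn:nqn.2016-06.io.spdk:cnode1",
	};
	size_t i;

	for (i = 0; i < SPDK_COUNTOF(trid_strs); i++) {
		if (spdk_nvme_transport_id_parse(&ps->trids[i], trid_strs[i]) != 0) {
			return -1;
		}
	}
	ps->current = 0;
	return 0;
}

/* Advance to the next path after the current controller enters the failed
 * state, wrapping so a later failure can try the first port again. */
static const struct spdk_nvme_transport_id *
next_failover_trid(struct path_set *ps)
{
	ps->current = (ps->current + 1) % SPDK_COUNTOF(ps->trids);
	return &ps->trids[ps->current];
}
```

A reconnect against the trid returned here would then presumably go through the normal connect path (e.g. spdk_nvme_connect()), which matches the "resetting controller" / "Resetting controller successful" lines bracketing each failover in the log.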
00:26:58.716 [2024-11-27 09:59:02.854978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21acd90 (9): Bad file descriptor
00:26:58.716 [2024-11-27 09:59:02.857408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:26:58.716 [2024-11-27 09:59:02.891524] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:26:58.716 11885.00 IOPS, 46.43 MiB/s [2024-11-27T08:59:14.182Z] 12080.67 IOPS, 47.19 MiB/s [2024-11-27T08:59:14.182Z] 12227.86 IOPS, 47.77 MiB/s [2024-11-27T08:59:14.182Z] 12344.88 IOPS, 48.22 MiB/s
00:26:58.716-00:26:58.720 [2024-11-27 09:59:07.216493-.218013] nvme_qpair.c: *NOTICE*: on the next path drop, every command still queued on sqid:1 (READs at lba 28944 and 28952 plus WRITEs covering lba 28960-29936, len:8 each, SGL DATA BLOCK) printed and aborted: ABORTED - SQ DELETION (00/08) qid:1, one print/completion pair per queued command
00:26:58.720 [2024-11-27 09:59:07.218029-.218082] nvme_qpair.c: *NOTICE*: three remaining queued WRITE commands (lba 29944/29952/29960, len:8, PRP) aborted and completed manually: ABORTED - SQ DELETION (00/08)
00:26:58.720 [2024-11-27 09:59:07.218115] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:26:58.720 [2024-11-27 09:59:07.218132-.218177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: four pending ASYNC EVENT REQUEST (0c) admin commands (qid:0 cid:0-3) aborted: ABORTED - SQ DELETION (00/08)
00:26:58.720 [2024-11-27 09:59:07.218183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:26:58.720 [2024-11-27 09:59:07.218210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21acd90 (9): Bad file descriptor
00:26:58.720 [2024-11-27 09:59:07.220598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:26:58.720 [2024-11-27 09:59:07.244246] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
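Each "Start failover from A to B" / "Resetting controller successful" pair above is provoked deliberately: the test drops the path that currently carries I/O and lets bdev_nvme reconnect on the surviving trid. A hedged sketch of that rotation step; the detach RPC is the one traced at host/failover.sh@84 and @98-@100 in this log, but the surrounding loop is illustrative, not a copy of failover.sh:

    # Illustrative rotation, not verbatim failover.sh: drop the active
    # path, then give bdev_nvme time to fail over to the next trid.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for port in 4420 4422 4421; do
        "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
            -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
        sleep 3
    done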
00:26:58.720 12363.11 IOPS, 48.29 MiB/s [2024-11-27T08:59:14.186Z] 12457.80 IOPS, 48.66 MiB/s [2024-11-27T08:59:14.186Z] 12490.45 IOPS, 48.79 MiB/s [2024-11-27T08:59:14.186Z] 12529.33 IOPS, 48.94 MiB/s [2024-11-27T08:59:14.186Z] 12553.85 IOPS, 49.04 MiB/s [2024-11-27T08:59:14.186Z] 12582.21 IOPS, 49.15 MiB/s
00:26:58.720 Latency(us)
00:26:58.720 [2024-11-27T08:59:14.186Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:58.720 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:58.720 Verification LBA range: start 0x0 length 0x4000
00:26:58.720 NVMe0n1 : 15.01 12599.44 49.22 647.55 0.00 9642.29 535.89 18896.21
00:26:58.720 [2024-11-27T08:59:14.186Z] ===================================================================================================================
00:26:58.720 [2024-11-27T08:59:14.186Z] Total : 12599.44 49.22 647.55 0.00 9642.29 535.89 18896.21
00:26:58.720 Received shutdown signal, test time was about 15.000000 seconds
00:26:58.720
00:26:58.720 Latency(us)
00:26:58.720 [2024-11-27T08:59:14.186Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:58.720 [2024-11-27T08:59:14.186Z] ===================================================================================================================
00:26:58.720 [2024-11-27T08:59:14.186Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:58.720 09:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:26:58.720 09:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:26:58.720 09:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:26:58.720 09:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3997735
00:26:58.720 09:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3997735 /var/tmp/bdevperf.sock
00:26:58.720 09:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:26:58.720 09:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3997735 ']'
00:26:58.720 09:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:26:58.720 09:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:58.720 09:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
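The @65-@67 trace above is the test's pass/fail gate: three failovers were provoked during the 15-second run, so the captured bdevperf output must contain exactly three reset-successful notices. A hedged reconstruction of that check, using the try.txt path from this run; the exact error handling in failover.sh may differ:

    # Reconstructed gate: three 'Resetting controller successful' lines
    # must appear in the captured output (try.txt in this run).
    log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    count=$(grep -c 'Resetting controller successful' "$log")
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, saw $count" >&2
        exit 1
    fi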
00:26:58.720 09:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:58.720 09:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:26:58.981 09:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:58.981 09:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:26:58.981 09:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:26:59.243 [2024-11-27 09:59:14.544502] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:26:59.243 09:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:26:59.505 [2024-11-27 09:59:14.728950] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:26:59.505 09:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:59.767 NVMe0n1
00:26:59.767 09:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:59.767
00:27:00.028 09:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:27:00.290
00:27:00.290 09:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:27:00.290 09:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:27:00.552 09:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:00.552 09:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:27:03.850 09:59:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:27:03.850 09:59:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:27:03.850 09:59:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3998882
00:27:03.850 09:59:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:27:03.850 09:59:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3998882
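bdevperf is being used here in its RPC-driven mode: -z starts it idle on a UNIX socket, the NVMe0 paths are attached over that socket, and bdevperf.py perform_tests kicks off the actual run whose JSON result follows. A condensed sketch of that pattern, assembled from the commands traced above; the backgrounding and pid plumbing are illustrative rather than a verbatim copy of failover.sh:

    # Condensed from the trace above; '&' and pid handling are illustrative.
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$spdk/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    # ... waitforlisten, then attach paths via rpc.py -s /var/tmp/bdevperf.sock ...
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests &
    run_test_pid=$!
    wait "$run_test_pid"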
00:27:04.809 "workload": "verify", 00:27:04.809 "status": "finished", 00:27:04.809 "verify_range": { 00:27:04.809 "start": 0, 00:27:04.809 "length": 16384 00:27:04.809 }, 00:27:04.809 "queue_depth": 128, 00:27:04.809 "io_size": 4096, 00:27:04.809 "runtime": 1.004988, 00:27:04.809 "iops": 12797.167727375849, 00:27:04.809 "mibps": 49.98893643506191, 00:27:04.809 "io_failed": 0, 00:27:04.809 "io_timeout": 0, 00:27:04.809 "avg_latency_us": 9952.339150403028, 00:27:04.809 "min_latency_us": 1413.12, 00:27:04.809 "max_latency_us": 13707.946666666667 00:27:04.809 } 00:27:04.809 ], 00:27:04.809 "core_count": 1 00:27:04.809 } 00:27:05.070 09:59:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:05.070 [2024-11-27 09:59:13.609686] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:27:05.070 [2024-11-27 09:59:13.609761] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3997735 ] 00:27:05.070 [2024-11-27 09:59:13.695169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.070 [2024-11-27 09:59:13.723065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.070 [2024-11-27 09:59:15.951566] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:27:05.070 [2024-11-27 09:59:15.951603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:05.070 [2024-11-27 09:59:15.951612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.070 [2024-11-27 09:59:15.951619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:05.070 [2024-11-27 09:59:15.951624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.070 [2024-11-27 09:59:15.951630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:05.070 [2024-11-27 09:59:15.951636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.070 [2024-11-27 09:59:15.951641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:05.070 [2024-11-27 09:59:15.951646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.070 [2024-11-27 09:59:15.951652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:27:05.070 [2024-11-27 09:59:15.951672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:27:05.070 [2024-11-27 09:59:15.951683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1795d90 (9): Bad file descriptor
00:27:05.070 [2024-11-27 09:59:15.960102] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:27:05.070 Running I/O for 1 seconds...
00:27:05.070 12728.00 IOPS, 49.72 MiB/s
00:27:05.070 Latency(us)
00:27:05.070 [2024-11-27T08:59:20.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:05.070 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:05.070 Verification LBA range: start 0x0 length 0x4000
00:27:05.070 NVMe0n1 : 1.00 12797.17 49.99 0.00 0.00 9952.34 1413.12 13707.95
00:27:05.070 [2024-11-27T08:59:20.536Z] ===================================================================================================================
00:27:05.070 [2024-11-27T08:59:20.536Z] Total : 12797.17 49.99 0.00 0.00 9952.34 1413.12 13707.95
00:27:05.070 09:59:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:27:05.070 09:59:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:27:05.070 09:59:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:05.331 09:59:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:27:05.331 09:59:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:27:05.591 09:59:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:05.591 09:59:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:27:08.892 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:27:08.892 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
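Each step above is guarded by the same idiom: bdev_nvme_get_controllers piped through grep -q NVMe0 to confirm the controller is still registered before acting on it. In this trace the guard runs once per step; a hedged polling variant of the same guard, useful when the controller may take a moment to appear (the retry loop is an assumption, not something failover.sh is shown doing):

    # Hypothetical helper built on the guard traced above; the retry
    # loop and its bounds are assumptions.
    wait_for_ctrlr() {
        local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
        local i
        for i in $(seq 1 10); do
            "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0 && return 0
            sleep 1
        done
        return 1
    }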
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:08.892 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3997735' 00:27:08.892 killing process with pid 3997735 00:27:08.892 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3997735 00:27:08.892 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3997735 00:27:09.152 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:27:09.152 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:09.152 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:27:09.152 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:09.152 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:27:09.152 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:09.152 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:27:09.152 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:09.152 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:27:09.152 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:09.152 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:09.152 rmmod nvme_tcp 00:27:09.413 rmmod nvme_fabrics 00:27:09.413 rmmod nvme_keyring 00:27:09.413 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:09.413 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:27:09.413 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:27:09.413 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3994037 ']' 00:27:09.413 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3994037 00:27:09.413 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3994037 ']' 00:27:09.413 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3994037 00:27:09.413 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:27:09.413 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:09.413 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3994037 00:27:09.413 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:09.413 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:09.413 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3994037' 00:27:09.413 killing process with pid 3994037 00:27:09.413 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3994037 00:27:09.413 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3994037 00:27:09.413 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
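
The nvmftestfini/nvmfcleanup trace just above can be paraphrased as the following sketch. The commands, the {1..20} retry loop and the pids are taken from the traced lines; the function name, the && break short-circuit and the sleep back-off are assumptions, since the real loop body is not fully shown in this excerpt:

failover_teardown() {
    sync
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    set +e
    for i in {1..20}; do
        # unload the kernel initiator modules; retried because they can still be busy
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1   # assumed back-off between attempts
    done
    set -e
    kill 3994037 && wait 3994037   # stop the nvmf_tgt reactor process (pid from this run)
}
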
00:27:09.413 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:09.413 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:09.413 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr
00:27:09.413 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save
00:27:09.413 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:09.413 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore
00:27:09.413 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:09.413 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:09.413 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:09.413 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:09.413 09:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:11.960 09:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:11.961
00:27:11.961 real 0m39.863s
00:27:11.961 user 2m3.137s
00:27:11.961 sys 0m8.412s
00:27:11.961 09:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:11.961 09:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:27:11.961 ************************************
00:27:11.961 END TEST nvmf_failover
00:27:11.961 ************************************
00:27:11.961 09:59:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:27:11.961 09:59:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:27:11.961 09:59:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:11.961 09:59:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:27:11.961 ************************************
00:27:11.961 START TEST nvmf_host_discovery
00:27:11.961 ************************************
00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:27:11.961 * Looking for test storage...
00:27:11.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:11.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.961 --rc genhtml_branch_coverage=1 00:27:11.961 --rc genhtml_function_coverage=1 00:27:11.961 --rc genhtml_legend=1 00:27:11.961 --rc geninfo_all_blocks=1 00:27:11.961 --rc geninfo_unexecuted_blocks=1 00:27:11.961 00:27:11.961 ' 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:11.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.961 --rc genhtml_branch_coverage=1 00:27:11.961 --rc genhtml_function_coverage=1 00:27:11.961 --rc genhtml_legend=1 00:27:11.961 --rc geninfo_all_blocks=1 00:27:11.961 --rc geninfo_unexecuted_blocks=1 00:27:11.961 00:27:11.961 ' 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:11.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.961 --rc genhtml_branch_coverage=1 00:27:11.961 --rc genhtml_function_coverage=1 00:27:11.961 --rc genhtml_legend=1 00:27:11.961 --rc geninfo_all_blocks=1 00:27:11.961 --rc geninfo_unexecuted_blocks=1 00:27:11.961 00:27:11.961 ' 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:11.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.961 --rc genhtml_branch_coverage=1 00:27:11.961 --rc genhtml_function_coverage=1 00:27:11.961 --rc genhtml_legend=1 00:27:11.961 --rc geninfo_all_blocks=1 00:27:11.961 --rc geninfo_unexecuted_blocks=1 00:27:11.961 00:27:11.961 ' 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:27:11.961 09:59:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:11.961 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:11.962 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:27:11.962 09:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:20.103 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:20.103 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:20.103 09:59:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:20.103 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:20.104 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:20.104 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:20.104 
09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:27:20.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:20.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms
00:27:20.104
00:27:20.104 --- 10.0.0.2 ping statistics ---
00:27:20.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:20.104 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:20.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:20.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms
00:27:20.104
00:27:20.104 --- 10.0.0.1 ping statistics ---
00:27:20.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:20.104 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms
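
Stripped of the xtrace noise, the nvmf_tcp_init topology built above is: one physical port (cvl_0_0) is moved into a private network namespace and becomes the target at 10.0.0.2, while its link partner (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A condensed sketch of just those commands, all taken from the trace (only the TARGET_NS variable name is added for readability):

TARGET_NS=cvl_0_0_ns_spdk
ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, root namespace
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up
ping -c 1 10.0.0.2                                # initiator -> target reachability
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1     # target -> initiator reachability
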
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=4004111
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 4004111
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 4004111 ']'
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:20.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:20.104 09:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:20.104 [2024-11-27 09:59:34.859910] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization...
00:27:20.104 [2024-11-27 09:59:34.859983] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:20.104 [2024-11-27 09:59:34.958862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.104 [2024-11-27 09:59:35.010063] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:20.104 [2024-11-27 09:59:35.010117] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:20.104 [2024-11-27 09:59:35.010126] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:20.104 [2024-11-27 09:59:35.010134] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:20.104 [2024-11-27 09:59:35.010140] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:20.104 [2024-11-27 09:59:35.010949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:20.365 09:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:20.365 09:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:27:20.365 09:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:20.365 09:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:20.365 09:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.365 09:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:20.365 09:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:20.365 09:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.365 09:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.365 [2024-11-27 09:59:35.714461] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:20.365 09:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.365 09:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:27:20.365 09:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.365 09:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.365 [2024-11-27 09:59:35.726708] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:20.365 09:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.365 09:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:27:20.365 09:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.366 09:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.366 null0 00:27:20.366 09:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.366 09:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:27:20.366 09:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.366 09:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.366 null1 00:27:20.366 09:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.366 09:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:27:20.366 09:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.366 09:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.366 09:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.366 09:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=4004452 00:27:20.366 09:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 4004452 /tmp/host.sock 00:27:20.366 09:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:27:20.366 09:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 4004452 ']' 00:27:20.366 09:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:27:20.366 09:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:20.366 09:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:20.366 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:20.366 09:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:20.366 09:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.366 [2024-11-27 09:59:35.827778] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
00:27:20.366 [2024-11-27 09:59:35.827886] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4004452 ]
00:27:20.627 [2024-11-27 09:59:35.921432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:20.627 [2024-11-27 09:59:35.974463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:21.218 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:21.218 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0
00:27:21.218 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:27:21.218 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:27:21.218 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:21.218 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:21.218 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:21.218 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
00:27:21.218 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:21.218 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:21.218 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:21.218 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0
00:27:21.218 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names
00:27:21.218 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:27:21.218 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:27:21.218 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:21.218 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:21.218 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:27:21.218 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:27:21.218 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:21.478 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]]
00:27:21.478 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list
00:27:21.478 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:21.478 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:27:21.478 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:21.478 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
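
At this point the test is running two SPDK processes: the target (nvmf_tgt inside the namespace, RPC on /var/tmp/spdk.sock) and a host-side app with its RPC socket on /tmp/host.sock. Paraphrased, the discovery startup just traced looks like the sketch below; rpc_cmd is the harness wrapper around scripts/rpc.py, and the two helper bodies are transcribed from the discovery.sh trace rather than quoted from the script:

rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

get_subsystem_names() {
    # controller names the host has attached, normalized to one sorted line
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {
    # bdevs created from namespaces found through discovery
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

# Until the target exposes a subsystem, both lists are expected to be empty:
[[ "$(get_subsystem_names)" == "" ]]
[[ "$(get_bdev_list)" == "" ]]
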
00:27:21.478 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:27:21.478 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:27:21.478 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:21.478 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]]
00:27:21.478 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:27:21.478 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:21.478 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:21.478 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:21.478 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names
00:27:21.478 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:27:21.478 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:27:21.478 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:21.478 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:21.478 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:27:21.478 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:27:21.478 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:21.478 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:27:21.478 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list
00:27:21.479 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:21.479 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:27:21.479 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:21.479 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:21.479 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:27:21.479 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:27:21.479 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:21.479 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:27:21.479 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:27:21.479 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:21.479 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:21.479 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:21.479 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names
00:27:21.479 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery --
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:21.479 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:21.479 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.479 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.479 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:21.479 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:21.479 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.739 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:27:21.739 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:27:21.740 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:21.740 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:21.740 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.740 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:21.740 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.740 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:21.740 09:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.740 [2024-11-27 09:59:37.009893] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:27:21.740 09:59:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
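
The is_notification_count_eq check above leans on two harness pieces that the trace expands inline: waitforcondition, which re-evaluates a shell expression up to 10 times, and get_notification_count, which counts events newer than the last seen notify_id via notify_get_notifications. A paraphrased sketch, reconstructed from the traced lines (the sleep 1 between attempts appears a little further down at autotest_common.sh@924; the explicit failure return is an assumption):

waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        eval "$cond" && return 0   # condition met: stop polling
        sleep 1
    done
    return 1   # assumed: give up once the retries are exhausted
}

get_notification_count() {
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
}

waitforcondition 'get_notification_count && ((notification_count == expected_count))'
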
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:27:21.740 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:22.001 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]]
00:27:22.001 09:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:27:22.261 [2024-11-27 09:59:37.672513] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:27:22.261 [2024-11-27 09:59:37.672533] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:27:22.261 [2024-11-27 09:59:37.672547] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:27:22.521
[2024-11-27 09:59:37.759840] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:22.521 [2024-11-27 09:59:37.983094] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:27:22.521 [2024-11-27 09:59:37.984056] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x21217a0:1 started. 00:27:22.521 [2024-11-27 09:59:37.985660] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:22.521 [2024-11-27 09:59:37.985678] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:22.781 [2024-11-27 09:59:37.992447] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x21217a0 was disconnected and freed. delete nvme_qpair. 00:27:22.781 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:22.781 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:22.781 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:22.781 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:22.781 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:22.781 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.781 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:22.781 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:22.781 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:23.042 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.042 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.042 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:23.042 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:23.042 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:23.042 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:23.043 09:59:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:23.043 09:59:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:23.043 [2024-11-27 09:59:38.437796] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2121cd0:1 started. 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:23.043 [2024-11-27 09:59:38.442569] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2121cd0 was disconnected and freed. delete nvme_qpair. 
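
The waitforcondition xtrace that repeats throughout this run comes from common/autotest_common.sh (the @918-@924 markers in the trace). Reconstructed from the trace alone, so a sketch rather than the verbatim source (in particular, the timeout return path is an assumption, since only the success path executes here), the polling helper is approximately:

    # Sketch of the traced waitforcondition helper; the final
    # "return 1" on timeout is an assumption, everything else
    # follows the xtrace above.
    waitforcondition() {
        local cond=$1    # e.g. '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
        local max=10     # poll up to 10 times, one second apart
        while (( max-- )); do
            eval "$cond" && return 0    # condition met
            sleep 1
        done
        return 1
    }
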
00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.043 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.305 [2024-11-27 09:59:38.549864] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:23.305 [2024-11-27 09:59:38.551044] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:23.305 [2024-11-27 09:59:38.551066] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:23.305 09:59:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.305 [2024-11-27 09:59:38.680475] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:27:23.305 09:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:27:23.567 [2024-11-27 09:59:38.943975] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:27:23.567 [2024-11-27 09:59:38.944013] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:23.567 [2024-11-27 09:59:38.944022] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:23.567 [2024-11-27 09:59:38.944027] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 
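
Each get_* probe in this trace is a thin wrapper over the host app's RPC socket. Assembled from the rpc_cmd/jq pipelines visible above (host/discovery.sh@55, @59, @63 and @74), the helpers look roughly like the sketch below; the pipelines are verbatim from the trace, while the function wrapping and the notify_id bookkeeping are inferred (the latter from the -i 0 / -i 1 / -i 2 progression of the traced values):

    # Sketches of the traced host/discovery.sh helpers;
    # /tmp/host.sock is the host application's RPC socket.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    get_notification_count() {
        # -i $notify_id skips events already counted; advancing notify_id
        # by the new count (as the traced values do) keeps checks incremental
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications \
            -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }
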
00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.513 [2024-11-27 09:59:39.817280] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:24.513 [2024-11-27 09:59:39.817302] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:24.513 [2024-11-27 09:59:39.817605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.513 [2024-11-27 09:59:39.817621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.513 [2024-11-27 09:59:39.817630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.513 [2024-11-27 09:59:39.817638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.513 [2024-11-27 09:59:39.817646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.513 [2024-11-27 09:59:39.817658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.513 [2024-11-27 09:59:39.817667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.513 [2024-11-27 09:59:39.817674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.513 [2024-11-27 09:59:39.817681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f1e10 is same with the state(6) to be set 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:24.513 [2024-11-27 09:59:39.827617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f1e10 (9): Bad file descriptor 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.513 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.514 [2024-11-27 09:59:39.837653] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:24.514 [2024-11-27 09:59:39.837667] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:24.514 [2024-11-27 09:59:39.837672] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:24.514 [2024-11-27 09:59:39.837681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:24.514 [2024-11-27 09:59:39.837698] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:24.514 [2024-11-27 09:59:39.838024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.514 [2024-11-27 09:59:39.838038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f1e10 with addr=10.0.0.2, port=4420 00:27:24.514 [2024-11-27 09:59:39.838046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f1e10 is same with the state(6) to be set 00:27:24.514 [2024-11-27 09:59:39.838058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f1e10 (9): Bad file descriptor 00:27:24.514 [2024-11-27 09:59:39.838069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:24.514 [2024-11-27 09:59:39.838076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:24.514 [2024-11-27 09:59:39.838084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:24.514 [2024-11-27 09:59:39.838090] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:24.514 [2024-11-27 09:59:39.838099] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:24.514 [2024-11-27 09:59:39.838105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:27:24.514 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.514 [2024-11-27 09:59:39.847730] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:24.514 [2024-11-27 09:59:39.847742] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:24.514 [2024-11-27 09:59:39.847746] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:24.514 [2024-11-27 09:59:39.847751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:24.514 [2024-11-27 09:59:39.847764] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:24.514 [2024-11-27 09:59:39.847935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.514 [2024-11-27 09:59:39.847947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f1e10 with addr=10.0.0.2, port=4420 00:27:24.514 [2024-11-27 09:59:39.847954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f1e10 is same with the state(6) to be set 00:27:24.514 [2024-11-27 09:59:39.847965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f1e10 (9): Bad file descriptor 00:27:24.514 [2024-11-27 09:59:39.847975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:24.514 [2024-11-27 09:59:39.847982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:24.514 [2024-11-27 09:59:39.847989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:24.514 [2024-11-27 09:59:39.847996] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:24.514 [2024-11-27 09:59:39.848000] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:24.514 [2024-11-27 09:59:39.848005] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:24.514 [2024-11-27 09:59:39.857797] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:24.514 [2024-11-27 09:59:39.857809] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:24.514 [2024-11-27 09:59:39.857813] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:24.514 [2024-11-27 09:59:39.857818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:24.514 [2024-11-27 09:59:39.857832] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:27:24.514 [2024-11-27 09:59:39.858142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.514 [2024-11-27 09:59:39.858154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f1e10 with addr=10.0.0.2, port=4420 00:27:24.514 [2024-11-27 09:59:39.858167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f1e10 is same with the state(6) to be set 00:27:24.514 [2024-11-27 09:59:39.858179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f1e10 (9): Bad file descriptor 00:27:24.514 [2024-11-27 09:59:39.858190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:24.514 [2024-11-27 09:59:39.858197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:24.514 [2024-11-27 09:59:39.858205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:24.514 [2024-11-27 09:59:39.858214] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:24.514 [2024-11-27 09:59:39.858219] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:24.514 [2024-11-27 09:59:39.858223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:24.514 [2024-11-27 09:59:39.867864] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:24.514 [2024-11-27 09:59:39.867877] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:24.514 [2024-11-27 09:59:39.867882] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:24.514 [2024-11-27 09:59:39.867887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:24.514 [2024-11-27 09:59:39.867902] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:24.514 [2024-11-27 09:59:39.868183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.514 [2024-11-27 09:59:39.868197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f1e10 with addr=10.0.0.2, port=4420 00:27:24.514 [2024-11-27 09:59:39.868204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f1e10 is same with the state(6) to be set 00:27:24.514 [2024-11-27 09:59:39.868216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f1e10 (9): Bad file descriptor 00:27:24.514 [2024-11-27 09:59:39.868227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:24.514 [2024-11-27 09:59:39.868234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:24.514 [2024-11-27 09:59:39.868241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:24.514 [2024-11-27 09:59:39.868247] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:27:24.514 [2024-11-27 09:59:39.868252] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:24.514 [2024-11-27 09:59:39.868257] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:24.514 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.514 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:24.514 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:24.514 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:24.514 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:24.514 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:24.514 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:24.514 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:24.514 [2024-11-27 09:59:39.877934] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:24.514 [2024-11-27 09:59:39.877946] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:24.514 [2024-11-27 09:59:39.877951] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:24.514 [2024-11-27 09:59:39.877955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:24.514 [2024-11-27 09:59:39.877973] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:24.514 [2024-11-27 09:59:39.878366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.514 [2024-11-27 09:59:39.878405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f1e10 with addr=10.0.0.2, port=4420 00:27:24.514 [2024-11-27 09:59:39.878416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f1e10 is same with the state(6) to be set 00:27:24.514 [2024-11-27 09:59:39.878435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f1e10 (9): Bad file descriptor 00:27:24.514 [2024-11-27 09:59:39.878448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:24.514 [2024-11-27 09:59:39.878455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:24.514 [2024-11-27 09:59:39.878463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:24.514 [2024-11-27 09:59:39.878470] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:24.514 [2024-11-27 09:59:39.878476] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:27:24.514 [2024-11-27 09:59:39.878480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:24.514 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:24.514 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:24.514 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:24.514 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.515 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.515 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:24.515 [2024-11-27 09:59:39.888005] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:24.515 [2024-11-27 09:59:39.888021] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:24.515 [2024-11-27 09:59:39.888026] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:24.515 [2024-11-27 09:59:39.888031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:24.515 [2024-11-27 09:59:39.888048] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:24.515 [2024-11-27 09:59:39.888460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.515 [2024-11-27 09:59:39.888498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f1e10 with addr=10.0.0.2, port=4420 00:27:24.515 [2024-11-27 09:59:39.888510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f1e10 is same with the state(6) to be set 00:27:24.515 [2024-11-27 09:59:39.888529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f1e10 (9): Bad file descriptor 00:27:24.515 [2024-11-27 09:59:39.888541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:24.515 [2024-11-27 09:59:39.888548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:24.515 [2024-11-27 09:59:39.888557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:24.515 [2024-11-27 09:59:39.888564] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:24.515 [2024-11-27 09:59:39.888569] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:24.515 [2024-11-27 09:59:39.888582] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:24.515 [2024-11-27 09:59:39.898081] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:24.515 [2024-11-27 09:59:39.898096] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:27:24.515 [2024-11-27 09:59:39.898101] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:24.515 [2024-11-27 09:59:39.898106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:24.515 [2024-11-27 09:59:39.898122] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:24.515 [2024-11-27 09:59:39.898395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.515 [2024-11-27 09:59:39.898409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f1e10 with addr=10.0.0.2, port=4420 00:27:24.515 [2024-11-27 09:59:39.898417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f1e10 is same with the state(6) to be set 00:27:24.515 [2024-11-27 09:59:39.898428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f1e10 (9): Bad file descriptor 00:27:24.515 [2024-11-27 09:59:39.898438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:24.515 [2024-11-27 09:59:39.898445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:24.515 [2024-11-27 09:59:39.898452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:24.515 [2024-11-27 09:59:39.898458] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:24.515 [2024-11-27 09:59:39.898463] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:24.515 [2024-11-27 09:59:39.898467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
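
The connect() errno = 111 (ECONNREFUSED) retry storm above is expected at this point in the test: the 4420 listener was just removed while the host still held a qpair on it, so bdev_nvme keeps redialing 10.0.0.2:4420 until the next discovery log page prunes the dead path (visible immediately below as "4420 not found"). The two steps driving this, exactly as issued in the trace (host/discovery.sh@127 and @131), amount to:

    # Target side: drop the first listener, then wait for the host
    # to converge on the surviving 4421 path (commands as traced).
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
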
00:27:24.515 [2024-11-27 09:59:39.905846] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:27:24.515 [2024-11-27 09:59:39.905864] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:24.515 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.515 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:24.515 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:24.515 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:24.515 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:24.515 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:24.515 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:24.515 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:27:24.515 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:24.515 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:24.515 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:24.515 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.515 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:24.515 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.515 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:24.515 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.778 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:27:24.778 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:24.778 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:27:24.778 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:24.778 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:24.778 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:24.778 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:24.778 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:24.778 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:27:24.778 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:24.778 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:24.778 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:24.778 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.778 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.778 09:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.778 09:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.163 [2024-11-27 09:59:41.262117] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:26.163 [2024-11-27 09:59:41.262131] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:26.163 [2024-11-27 09:59:41.262140] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:26.163 [2024-11-27 09:59:41.350399] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:27:26.424 [2024-11-27 09:59:41.659848] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:27:26.424 [2024-11-27 09:59:41.660503] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x20ef700:1 started. 
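
The next assertion (host/discovery.sh@143 below) restarts discovery under the same controller name and expects the JSON-RPC call to be rejected with -17 "File exists". The NOT wrapper that inverts rpc_cmd's exit status is traced at common/autotest_common.sh@652-@679; a rough reconstruction from that trace (the (( es > 128 )) and [[ -n ... ]] branches, which appear to special-case abnormal exits, are assumptions and elided here):

    # Sketch of the traced NOT wrapper: succeed only if the
    # wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?       # run the command, capture its status
        (( !es == 0 ))      # exit 0 only when the command failed
    }

So NOT rpc_cmd ... bdev_nvme_start_discovery ... passes below precisely because the duplicate start is rejected.
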
00:27:26.424 [2024-11-27 09:59:41.661863] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:26.424 [2024-11-27 09:59:41.661886] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:26.424 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.424 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:26.424 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.425 request: 00:27:26.425 { 00:27:26.425 "name": "nvme", 00:27:26.425 "trtype": "tcp", 00:27:26.425 "traddr": "10.0.0.2", 00:27:26.425 "adrfam": "ipv4", 00:27:26.425 "trsvcid": "8009", 00:27:26.425 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:26.425 "wait_for_attach": true, 00:27:26.425 "method": "bdev_nvme_start_discovery", 00:27:26.425 "req_id": 1 00:27:26.425 } 00:27:26.425 Got JSON-RPC error response 00:27:26.425 response: 00:27:26.425 { 00:27:26.425 "code": -17, 00:27:26.425 "message": "File exists" 00:27:26.425 } 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.425 09:59:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.425 [2024-11-27 09:59:41.711815] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x20ef700 was disconnected and freed. delete nvme_qpair. 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.425 request: 00:27:26.425 { 00:27:26.425 "name": "nvme_second", 00:27:26.425 "trtype": "tcp", 00:27:26.425 "traddr": "10.0.0.2", 00:27:26.425 "adrfam": "ipv4", 00:27:26.425 "trsvcid": "8009", 00:27:26.425 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:26.425 "wait_for_attach": true, 00:27:26.425 "method": 
"bdev_nvme_start_discovery", 00:27:26.425 "req_id": 1 00:27:26.425 } 00:27:26.425 Got JSON-RPC error response 00:27:26.425 response: 00:27:26.425 { 00:27:26.425 "code": -17, 00:27:26.425 "message": "File exists" 00:27:26.425 } 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:26.425 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.687 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:26.687 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:26.687 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:27:26.687 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:26.687 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:26.687 09:59:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:26.687 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:26.687 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:26.687 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:26.687 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.687 09:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:27.629 [2024-11-27 09:59:42.909717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-11-27 09:59:42.909741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20eee70 with addr=10.0.0.2, port=8010 00:27:27.629 [2024-11-27 09:59:42.909751] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:27.629 [2024-11-27 09:59:42.909756] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:27.629 [2024-11-27 09:59:42.909761] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:28.570 [2024-11-27 09:59:43.912065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.570 [2024-11-27 09:59:43.912083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20eee70 with addr=10.0.0.2, port=8010 00:27:28.570 [2024-11-27 09:59:43.912091] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:28.570 [2024-11-27 09:59:43.912096] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:28.570 [2024-11-27 09:59:43.912100] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:29.513 [2024-11-27 09:59:44.914063] bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:27:29.513 request: 00:27:29.513 { 00:27:29.513 "name": "nvme_second", 00:27:29.513 "trtype": "tcp", 00:27:29.513 "traddr": "10.0.0.2", 00:27:29.513 "adrfam": "ipv4", 00:27:29.513 "trsvcid": "8010", 00:27:29.513 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:29.513 "wait_for_attach": false, 00:27:29.513 "attach_timeout_ms": 3000, 00:27:29.513 "method": "bdev_nvme_start_discovery", 00:27:29.513 "req_id": 1 00:27:29.513 } 00:27:29.513 Got JSON-RPC error response 00:27:29.513 response: 00:27:29.513 { 00:27:29.513 "code": -110, 00:27:29.513 "message": "Connection timed out" 00:27:29.513 } 00:27:29.513 09:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:29.513 09:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:27:29.513 09:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:29.513 09:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:29.513 09:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:29.513 09:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:27:29.513 09:59:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:29.513 09:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:29.513 09:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.513 09:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:29.513 09:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.513 09:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:29.513 09:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.513 09:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:27:29.513 09:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:27:29.513 09:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 4004452 00:27:29.513 09:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:27:29.513 09:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:29.514 09:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:27:29.514 09:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:29.514 09:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:27:29.514 09:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:29.514 09:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:29.774 rmmod nvme_tcp 00:27:29.774 rmmod nvme_fabrics 00:27:29.774 rmmod nvme_keyring 00:27:29.774 09:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:29.774 09:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:27:29.774 09:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:27:29.774 09:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 4004111 ']' 00:27:29.774 09:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 4004111 00:27:29.774 09:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 4004111 ']' 00:27:29.774 09:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 4004111 00:27:29.774 09:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:27:29.774 09:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:29.774 09:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4004111 00:27:29.774 09:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:29.774 09:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:29.775 09:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4004111' 00:27:29.775 killing process with pid 4004111 00:27:29.775 09:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 4004111 
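The killprocess helper traced above for pid 4004111 refuses to signal anything it cannot positively identify: it checks the pid is non-empty and still alive, resolves its comm name (here reactor_1, the SPDK reactor thread), and bails out rather than kill a bare sudo wrapper. A rough sketch of that shape, reconstructed from the traced autotest_common.sh lines (any signal escalation or non-Linux branch is not visible here):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" 2>/dev/null || return 1      # still running?
        local process_name
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1      # never kill the sudo shim
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true   # reap; a killed child exits non-zero
    }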
00:27:29.775 09:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 4004111 00:27:29.775 09:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:29.775 09:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:29.775 09:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:29.775 09:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:27:29.775 09:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:27:29.775 09:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:29.775 09:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:27:29.775 09:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:29.775 09:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:29.775 09:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.035 09:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:30.035 09:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.949 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:31.949 00:27:31.949 real 0m20.287s 00:27:31.949 user 0m23.488s 00:27:31.949 sys 0m7.209s 00:27:31.949 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:31.949 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.949 ************************************ 00:27:31.949 END TEST nvmf_host_discovery 00:27:31.949 ************************************ 00:27:31.949 09:59:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:31.949 09:59:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:31.949 09:59:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:31.949 09:59:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.949 ************************************ 00:27:31.949 START TEST nvmf_host_multipath_status 00:27:31.949 ************************************ 00:27:31.949 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:32.210 * Looking for test storage... 
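Worth noting in the nvmftestfini teardown just traced: every iptables rule the fixture installs is tagged with an SPDK_NVMF comment (the matching ipts wrapper appears later in this log when the next test opens port 4420), so cleanup drops all of them in one save/filter/restore pass instead of tracking rules individually:

    iptr() {   # nvmf/common.sh@791, exactly as traced
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }

The same teardown then removes the cvl_0_0_ns_spdk namespace via _remove_spdk_ns and flushes the leftover address on cvl_0_1.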
00:27:32.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:32.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.210 --rc genhtml_branch_coverage=1 00:27:32.210 --rc genhtml_function_coverage=1 00:27:32.210 --rc genhtml_legend=1 00:27:32.210 --rc geninfo_all_blocks=1 00:27:32.210 --rc geninfo_unexecuted_blocks=1 00:27:32.210 00:27:32.210 ' 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:32.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.210 --rc genhtml_branch_coverage=1 00:27:32.210 --rc genhtml_function_coverage=1 00:27:32.210 --rc genhtml_legend=1 00:27:32.210 --rc geninfo_all_blocks=1 00:27:32.210 --rc geninfo_unexecuted_blocks=1 00:27:32.210 00:27:32.210 ' 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:32.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.210 --rc genhtml_branch_coverage=1 00:27:32.210 --rc genhtml_function_coverage=1 00:27:32.210 --rc genhtml_legend=1 00:27:32.210 --rc geninfo_all_blocks=1 00:27:32.210 --rc geninfo_unexecuted_blocks=1 00:27:32.210 00:27:32.210 ' 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:32.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.210 --rc genhtml_branch_coverage=1 00:27:32.210 --rc genhtml_function_coverage=1 00:27:32.210 --rc genhtml_legend=1 00:27:32.210 --rc geninfo_all_blocks=1 00:27:32.210 --rc geninfo_unexecuted_blocks=1 00:27:32.210 00:27:32.210 ' 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
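The lt/cmp_versions trace above (scripts/common.sh@333-368) splits each version string on '.', '-' or ':' and compares component pairs numerically, so 'lt 1.15 2' correctly reports 1.15 < 2 where a plain string compare would not. A condensed sketch, assuming missing components default to 0 and omitting the traced per-field 'decimal' validation:

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        local op=$2
        IFS=.-: read -ra ver2 <<< "$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all components equal
    }

Here the result only selects which --rc option spelling to hand to lcov, as the LCOV_OPTS exports that follow show.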
00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.210 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.211 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.211 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:27:32.211 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.211 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:27:32.211 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:32.211 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:32.211 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:32.211 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:32.211 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:32.211 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:32.211 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:32.211 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:32.211 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:32.211 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:32.211 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:32.211 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:32.211 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:32.211 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:27:32.211 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:32.211 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:32.211 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:27:32.211 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:32.211 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:32.211 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:32.211 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:32.211 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:32.211 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.211 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:32.211 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.211 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:32.211 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:32.211 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:27:32.211 09:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:40.366 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:40.366 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:27:40.366 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:40.366 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:40.366 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:27:40.367 09:59:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:40.367 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
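With the two e810 functions identified by vendor:device id (0x8086:0x159b, driver ice), the next trace entries resolve each PCI address to its kernel netdev purely through sysfs; no driver-specific tooling is involved. The shape, with the link-state check as an assumption (the traced '[[ up == up ]]' does not show where 'up' was read from):

    for pci in "${pci_devs[@]}"; do
        # each NIC's netdev name lives under its PCI address in sysfs
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        for net_dev in "${pci_net_devs[@]}"; do
            [[ $(< "$net_dev/operstate") == up ]] || continue   # assumption
            echo "Found net devices under $pci: ${net_dev##*/}"
        done
    done

This is how 0000:4b:00.0 and 0000:4b:00.1 become cvl_0_0 and cvl_0_1 in the entries below.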
00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:40.367 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:40.367 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: 
cvl_0_1' 00:27:40.367 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:40.367 09:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:40.367 09:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:40.367 09:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:40.367 09:59:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:40.367 09:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:40.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:40.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:27:40.367 00:27:40.367 --- 10.0.0.2 ping statistics --- 00:27:40.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.367 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:27:40.367 09:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:40.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:40.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:27:40.367 00:27:40.367 --- 10.0.0.1 ping statistics --- 00:27:40.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.367 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:27:40.367 09:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:40.368 09:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:27:40.368 09:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:40.368 09:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:40.368 09:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:40.368 09:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:40.368 09:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:40.368 09:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:40.368 09:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:40.368 09:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:27:40.368 09:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:40.368 09:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:40.368 09:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:40.368 09:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=4010627 00:27:40.368 09:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 4010627 00:27:40.368 09:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:40.368 09:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 4010627 ']' 00:27:40.368 09:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:40.368 09:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:40.368 09:59:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:40.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:40.368 09:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:40.368 09:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:40.368 [2024-11-27 09:59:55.197545] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:27:40.368 [2024-11-27 09:59:55.197612] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:40.368 [2024-11-27 09:59:55.295984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:40.368 [2024-11-27 09:59:55.347626] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:40.368 [2024-11-27 09:59:55.347680] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:40.368 [2024-11-27 09:59:55.347690] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:40.368 [2024-11-27 09:59:55.347698] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:40.368 [2024-11-27 09:59:55.347704] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:40.368 [2024-11-27 09:59:55.349296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.368 [2024-11-27 09:59:55.349323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:40.629 09:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:40.629 09:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:27:40.629 09:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:40.629 09:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:40.629 09:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:40.629 09:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:40.629 09:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=4010627 00:27:40.629 09:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:40.890 [2024-11-27 09:59:56.236356] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:40.890 09:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:41.151 Malloc0 00:27:41.151 09:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:27:41.413 09:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:41.674 09:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:41.674 [2024-11-27 09:59:57.067333] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:41.674 09:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:41.936 [2024-11-27 09:59:57.267870] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:41.936 09:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=4011001 00:27:41.936 09:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:41.936 09:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:41.936 09:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 4011001 /var/tmp/bdevperf.sock 00:27:41.936 09:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 4011001 ']' 00:27:41.936 09:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:41.936 09:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:41.936 09:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:41.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
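Condensed from the RPCs traced above, the whole multipath fixture is six target-side calls plus the two initiator-side attaches traced next; both attaches name the same bdev (-b Nvme0) with -x multipath, which is what folds the 4420 and 4421 paths into the single Nvme0n1 that appears below (full rpc.py path abbreviated):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # initiator side, against the bdevperf app's own RPC socket:
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10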
00:27:41.936 09:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:41.936 09:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:42.878 09:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:42.878 09:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:27:42.878 09:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:43.139 09:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:43.398 Nvme0n1 00:27:43.398 09:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:43.658 Nvme0n1 00:27:43.658 09:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:27:43.658 09:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:46.202 10:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:27:46.202 10:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:46.202 10:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:46.202 10:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:27:47.146 10:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:27:47.146 10:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:47.146 10:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:47.146 10:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:47.407 10:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:47.407 10:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:47.407 10:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:47.407 10:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:47.407 10:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:47.407 10:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:47.407 10:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:47.407 10:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:47.669 10:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:47.669 10:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:47.669 10:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:47.669 10:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:47.929 10:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:47.929 10:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:47.929 10:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:47.929 10:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.189 10:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:48.189 10:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:48.189 10:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.189 10:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:48.189 10:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:48.189 10:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:27:48.189 10:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
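Host-side setup goes through bdevperf's own RPC socket (-s /var/tmp/bdevperf.sock) rather than the target's. Attaching the same subsystem through both listeners under one controller name (-b Nvme0) with -x multipath is what creates the two io_paths that port_status interrogates. A condensed sketch; the jq pipe reconstructs how the paired @64 lines in the trace combine, and the glosses for -l/-o (controller-loss timeout and reconnect delay) are the usual SPDK bdev_nvme meanings, not something the log states:

  rpc_sock=/var/tmp/bdevperf.sock
  rpc.py -s $rpc_sock bdev_nvme_set_options -r -1     # option string taken verbatim from the trace
  rpc.py -s $rpc_sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  rpc.py -s $rpc_sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10   # same -b: a second path, not a second bdev
  # port_status 4420 current <expected> reduces to:
  rpc.py -s $rpc_sock bdev_nvme_get_io_paths \
      | jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current'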
00:27:48.449 10:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:48.709 10:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:27:49.673 10:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:27:49.673 10:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:49.673 10:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:49.673 10:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:50.049 10:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:50.049 10:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:50.049 10:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.049 10:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:50.049 10:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:50.049 10:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:50.049 10:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.049 10:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:50.049 10:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:50.049 10:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:50.326 10:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.326 10:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:50.326 10:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:50.326 10:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:50.326 10:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
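Every check from here on repeats one pattern: set the ANA state of each listener on the target, sleep one second so the initiator can observe the change, then assert three flags (current, connected, accessible) per path. Spelled out for the non_optimized/optimized combination just issued, with expected values read straight off the surrounding check_status calls:

  rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
  rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4421 -n optimized
  sleep 1
  # check_status false true true true true true means:
  #   4420 current=false, 4421 current=true   (I/O moves to the optimized path)
  #   both paths connected=true and accessible=true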
00:27:50.326 10:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:50.585 10:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:50.585 10:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:50.585 10:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.585 10:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:50.585 10:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:50.585 10:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:27:50.585 10:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:50.845 10:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:51.104 10:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:27:52.042 10:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:27:52.042 10:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:52.042 10:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:52.042 10:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:52.301 10:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:52.301 10:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:52.301 10:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:52.301 10:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:52.301 10:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:52.301 10:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:52.561 10:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:52.561 10:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:52.561 10:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:52.561 10:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:52.561 10:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:52.561 10:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:52.821 10:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:52.821 10:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:52.821 10:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:52.821 10:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:53.081 10:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.081 10:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:53.081 10:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.081 10:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:53.081 10:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.081 10:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:27:53.081 10:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:53.341 10:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:53.600 10:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:27:54.539 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:27:54.539 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:54.539 10:00:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:54.539 10:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:54.803 10:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:54.803 10:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:54.803 10:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:54.803 10:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:54.803 10:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:54.803 10:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:54.803 10:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:54.803 10:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:55.062 10:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:55.062 10:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:55.062 10:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:55.062 10:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:55.322 10:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:55.322 10:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:55.322 10:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:55.322 10:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:55.582 10:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:55.582 10:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:55.582 10:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:55.582 10:00:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:55.582 10:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:55.582 10:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:27:55.582 10:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:55.842 10:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:56.102 10:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:27:57.041 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:27:57.041 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:57.041 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:57.041 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:57.301 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:57.301 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:57.301 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:57.302 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:57.302 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:57.302 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:57.302 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:57.302 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:57.561 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:57.561 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:57.561 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:57.561 10:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:57.822 10:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:57.822 10:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:57.822 10:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:57.822 10:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:57.822 10:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:57.822 10:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:57.822 10:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:57.822 10:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:58.082 10:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:58.083 10:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:27:58.083 10:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:58.343 10:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:58.604 10:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:27:59.545 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:27:59.545 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:59.545 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:59.545 10:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:59.545 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:59.806 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:59.806 10:00:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:59.806 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:59.806 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:59.806 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:59.806 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:59.806 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:00.067 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:00.067 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:00.067 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:00.067 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:00.328 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:00.328 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:00.328 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:00.328 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:00.328 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:00.328 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:00.328 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:00.328 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:00.588 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:00.588 10:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:28:00.848 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:28:00.848 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:28:01.108 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:01.108 10:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:28:02.049 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:28:02.050 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:02.341 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:02.341 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:02.341 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:02.341 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:02.341 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:02.341 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:02.601 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:02.601 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:02.601 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:02.601 10:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:02.601 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:02.601 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:02.601 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:02.601 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:02.862 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:02.862 10:00:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:02.862 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:02.862 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:03.122 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:03.122 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:03.122 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:03.122 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:03.383 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:03.383 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:28:03.383 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:03.383 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:03.644 10:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:28:04.587 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:28:04.587 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:04.587 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:04.587 10:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:04.848 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:04.848 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:04.848 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:04.848 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:05.108 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:05.108 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:05.108 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:05.108 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:05.108 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:05.108 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:05.108 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:05.108 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:05.369 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:05.369 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:05.369 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:05.369 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:05.629 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:05.629 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:05.629 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:05.629 10:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:05.889 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:05.889 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:28:05.890 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:05.890 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:28:06.149 10:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
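The bdev_nvme_set_multipath_policy call above is the hinge of the second half of the test. Under the behavior seen earlier, only one path was current at a time (optimized/optimized left just port 4420 current); active_active instead reports every path in the best ANA group as current, which is why the same ANA combinations now produce all-true check_status results. This reading is inferred from the before/after checks in the trace:

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
  # After this call, optimized/optimized and non_optimized/non_optimized each show both
  # 4420 and 4421 with current=true, spreading the verify workload across paths.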
00:28:07.089 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:28:07.089 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:07.089 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:07.089 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:07.351 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:07.351 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:07.351 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:07.351 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:07.612 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:07.612 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:07.612 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:07.612 10:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:07.612 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:07.612 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:07.613 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:07.613 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:07.873 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:07.873 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:07.873 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:07.873 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:08.133 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:08.133 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:08.133 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:08.133 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:08.395 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:08.395 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:28:08.395 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:08.395 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:08.656 10:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:28:09.598 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:28:09.598 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:09.598 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:09.598 10:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:09.860 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:09.860 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:09.860 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:09.860 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:10.120 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:10.120 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:10.120 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:10.120 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:10.120 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:28:10.120 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:10.120 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:10.120 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:10.381 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:10.381 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:10.381 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:10.381 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:10.642 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:10.642 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:10.642 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:10.642 10:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:10.906 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:10.906 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 4011001 00:28:10.906 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 4011001 ']' 00:28:10.906 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 4011001 00:28:10.906 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:28:10.906 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:10.906 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4011001 00:28:10.906 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:28:10.906 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:28:10.906 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4011001' 00:28:10.906 killing process with pid 4011001 00:28:10.906 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 4011001 00:28:10.906 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 4011001 00:28:10.906 { 00:28:10.906 "results": [ 00:28:10.906 { 00:28:10.906 "job": "Nvme0n1", 
00:28:10.906 "core_mask": "0x4", 00:28:10.906 "workload": "verify", 00:28:10.906 "status": "terminated", 00:28:10.906 "verify_range": { 00:28:10.906 "start": 0, 00:28:10.906 "length": 16384 00:28:10.906 }, 00:28:10.906 "queue_depth": 128, 00:28:10.906 "io_size": 4096, 00:28:10.906 "runtime": 26.938531, 00:28:10.906 "iops": 11936.731071193155, 00:28:10.906 "mibps": 46.62785574684826, 00:28:10.906 "io_failed": 0, 00:28:10.906 "io_timeout": 0, 00:28:10.906 "avg_latency_us": 10704.042641721451, 00:28:10.906 "min_latency_us": 307.2, 00:28:10.906 "max_latency_us": 3019898.88 00:28:10.906 } 00:28:10.906 ], 00:28:10.906 "core_count": 1 00:28:10.906 } 00:28:10.906 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 4011001 00:28:10.906 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:10.906 [2024-11-27 09:59:57.348622] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:28:10.906 [2024-11-27 09:59:57.348701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4011001 ] 00:28:10.906 [2024-11-27 09:59:57.440463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.906 [2024-11-27 09:59:57.490769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:10.906 Running I/O for 90 seconds... 00:28:10.906 10184.00 IOPS, 39.78 MiB/s [2024-11-27T09:00:26.372Z] 10668.00 IOPS, 41.67 MiB/s [2024-11-27T09:00:26.372Z] 10825.67 IOPS, 42.29 MiB/s [2024-11-27T09:00:26.372Z] 11059.00 IOPS, 43.20 MiB/s [2024-11-27T09:00:26.372Z] 11439.40 IOPS, 44.69 MiB/s [2024-11-27T09:00:26.372Z] 11692.33 IOPS, 45.67 MiB/s [2024-11-27T09:00:26.372Z] 11890.00 IOPS, 46.45 MiB/s [2024-11-27T09:00:26.372Z] 12016.88 IOPS, 46.94 MiB/s [2024-11-27T09:00:26.372Z] 12133.89 IOPS, 47.40 MiB/s [2024-11-27T09:00:26.372Z] 12206.70 IOPS, 47.68 MiB/s [2024-11-27T09:00:26.372Z] 12277.27 IOPS, 47.96 MiB/s [2024-11-27T09:00:26.372Z] [2024-11-27 10:00:11.127828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:125952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.906 [2024-11-27 10:00:11.127861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:10.906 [2024-11-27 10:00:11.127892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:125960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.906 [2024-11-27 10:00:11.127898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:10.906 [2024-11-27 10:00:11.127910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:125968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.906 [2024-11-27 10:00:11.127915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:10.906 [2024-11-27 10:00:11.127926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:125976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.906 [2024-11-27 10:00:11.127931] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:10.906 [2024-11-27 10:00:11.127941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:125984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.906 [2024-11-27 10:00:11.127947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:10.906 [2024-11-27 10:00:11.127957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:125992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.906 [2024-11-27 10:00:11.127962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:10.906 [2024-11-27 10:00:11.127972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:125712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.906 [2024-11-27 10:00:11.127978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:10.906 [2024-11-27 10:00:11.127988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:125720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.906 [2024-11-27 10:00:11.127994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:10.906 [2024-11-27 10:00:11.128004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:125728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.906 [2024-11-27 10:00:11.128009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:10.906 [2024-11-27 10:00:11.128019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:125736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.906 [2024-11-27 10:00:11.128029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:10.906 [2024-11-27 10:00:11.128040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:125744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.907 [2024-11-27 10:00:11.128045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.128056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:125752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.907 [2024-11-27 10:00:11.128061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.128491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.907 [2024-11-27 10:00:11.128503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.128516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:126000 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:10.907 [2024-11-27 10:00:11.128521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.128532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:126008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.907 [2024-11-27 10:00:11.128538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.128548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:126016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.907 [2024-11-27 10:00:11.128554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.128564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.907 [2024-11-27 10:00:11.128569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.128580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:126032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.907 [2024-11-27 10:00:11.128586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.128597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:126040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.907 [2024-11-27 10:00:11.128602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.128613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:126048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.907 [2024-11-27 10:00:11.128618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.128630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:126056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.907 [2024-11-27 10:00:11.128635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.128646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:126064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.907 [2024-11-27 10:00:11.128654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.128665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:126072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.907 [2024-11-27 10:00:11.128670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.128681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:23 nsid:1 lba:126080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.907 [2024-11-27 10:00:11.128686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.128697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:126088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.907 [2024-11-27 10:00:11.128702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.128713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:126096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.907 [2024-11-27 10:00:11.128717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.128728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:126104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.907 [2024-11-27 10:00:11.128733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.128744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:126112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.907 [2024-11-27 10:00:11.128749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.128760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:126120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.907 [2024-11-27 10:00:11.128765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.128776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:126128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.907 [2024-11-27 10:00:11.128781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.128791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:126136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.907 [2024-11-27 10:00:11.128796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.128807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:126144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.907 [2024-11-27 10:00:11.128812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.128823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:126152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.907 [2024-11-27 10:00:11.128828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.128840] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:126160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.907 [2024-11-27 10:00:11.128845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.128857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:126168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.907 [2024-11-27 10:00:11.128862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.128873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:126176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.907 [2024-11-27 10:00:11.128878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.128889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:126184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.907 [2024-11-27 10:00:11.128893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.128904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:126192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.907 [2024-11-27 10:00:11.128909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.128920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:126200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.907 [2024-11-27 10:00:11.128925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.128936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:126208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.907 [2024-11-27 10:00:11.128941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.128951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:126216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.907 [2024-11-27 10:00:11.128957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.128967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:126224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.907 [2024-11-27 10:00:11.128972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.128983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:126232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.907 [2024-11-27 10:00:11.128988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.128999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:126240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.907 [2024-11-27 10:00:11.129005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.129893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:126248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.907 [2024-11-27 10:00:11.129902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.129915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:126256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.907 [2024-11-27 10:00:11.129920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.129935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:126264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.907 [2024-11-27 10:00:11.129940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.129952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:126272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.907 [2024-11-27 10:00:11.129957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:10.907 [2024-11-27 10:00:11.129970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:126280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.907 [2024-11-27 10:00:11.129975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.129987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:126288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.129992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:126296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:126304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:126312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:126320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:126328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:126344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:126352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:126360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:126368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:126376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:126384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:126392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130257] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:126400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:126416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:126424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:126432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:126440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:126448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:126456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:126472 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:126488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:126496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:126504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:126512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:126520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:126528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:126536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:126544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:114 nsid:1 lba:126552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:126560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:126568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:126576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:126584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.908 [2024-11-27 10:00:11.130705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:10.908 [2024-11-27 10:00:11.130718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:126592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.909 [2024-11-27 10:00:11.130723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.130736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:126600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.909 [2024-11-27 10:00:11.130742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.130755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:126608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.909 [2024-11-27 10:00:11.130760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.130773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:126616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.909 [2024-11-27 10:00:11.130778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.130792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:126624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.909 [2024-11-27 10:00:11.130797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.130810] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:126632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.909 [2024-11-27 10:00:11.130816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.130829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:126640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.909 [2024-11-27 10:00:11.130834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.130847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:126648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.909 [2024-11-27 10:00:11.130853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.130867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:126656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.909 [2024-11-27 10:00:11.130872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.130885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:126664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.909 [2024-11-27 10:00:11.130890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.130904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:126672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.909 [2024-11-27 10:00:11.130909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.130922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:126680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.909 [2024-11-27 10:00:11.130927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.130940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:126688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.909 [2024-11-27 10:00:11.130946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.130959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:126696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.909 [2024-11-27 10:00:11.130964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.130977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:126704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.909 [2024-11-27 10:00:11.130982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 
sqhd:001c p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.130995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:126712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.909 [2024-11-27 10:00:11.131001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.131014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:126720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.909 [2024-11-27 10:00:11.131019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.131032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:125768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.909 [2024-11-27 10:00:11.131037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.131050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:125776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.909 [2024-11-27 10:00:11.131055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.131069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:125784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.909 [2024-11-27 10:00:11.131074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.131088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:125792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.909 [2024-11-27 10:00:11.131093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.131180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:125800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.909 [2024-11-27 10:00:11.131187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.131203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:125808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.909 [2024-11-27 10:00:11.131208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.131224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:125816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.909 [2024-11-27 10:00:11.131229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.131244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:125824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.909 [2024-11-27 10:00:11.131250] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.131265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:125832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.909 [2024-11-27 10:00:11.131270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.131286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:125840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.909 [2024-11-27 10:00:11.131291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.131307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:125848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.909 [2024-11-27 10:00:11.131313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.131328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:125856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.909 [2024-11-27 10:00:11.131333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.131349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:125864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.909 [2024-11-27 10:00:11.131354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.131370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:125872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.909 [2024-11-27 10:00:11.131375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.131391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:125880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.909 [2024-11-27 10:00:11.131396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.131412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:125888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.909 [2024-11-27 10:00:11.131418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.131434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:125896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.909 [2024-11-27 10:00:11.131439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.131454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:125904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.909 
[2024-11-27 10:00:11.131460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.131475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:125912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.909 [2024-11-27 10:00:11.131481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.131496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:125920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.909 [2024-11-27 10:00:11.131501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.131516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:125928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.909 [2024-11-27 10:00:11.131522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.131538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:125936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.909 [2024-11-27 10:00:11.131543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:10.909 [2024-11-27 10:00:11.131558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:125944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.910 [2024-11-27 10:00:11.131563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:10.910 [2024-11-27 10:00:11.131579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:126728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.910 [2024-11-27 10:00:11.131585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:10.910 12232.17 IOPS, 47.78 MiB/s [2024-11-27T09:00:26.376Z] 11291.23 IOPS, 44.11 MiB/s [2024-11-27T09:00:26.376Z] 10484.71 IOPS, 40.96 MiB/s [2024-11-27T09:00:26.376Z] 9867.13 IOPS, 38.54 MiB/s [2024-11-27T09:00:26.376Z] 10063.81 IOPS, 39.31 MiB/s [2024-11-27T09:00:26.376Z] 10235.00 IOPS, 39.98 MiB/s [2024-11-27T09:00:26.376Z] 10587.67 IOPS, 41.36 MiB/s [2024-11-27T09:00:26.376Z] 10921.21 IOPS, 42.66 MiB/s [2024-11-27T09:00:26.376Z] 11141.35 IOPS, 43.52 MiB/s [2024-11-27T09:00:26.376Z] 11230.48 IOPS, 43.87 MiB/s [2024-11-27T09:00:26.376Z] 11304.86 IOPS, 44.16 MiB/s [2024-11-27T09:00:26.376Z] 11498.30 IOPS, 44.92 MiB/s [2024-11-27T09:00:26.376Z] 11722.21 IOPS, 45.79 MiB/s [2024-11-27T09:00:26.376Z] [2024-11-27 10:00:23.955269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:116824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.910 [2024-11-27 10:00:23.955304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:10.910 [2024-11-27 10:00:23.955331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:116856 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.910 [2024-11-27 10:00:23.955338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:10.910 [2024-11-27 10:00:23.955348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:116888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.910 [2024-11-27 10:00:23.955358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:10.910 [2024-11-27 10:00:23.955369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:116920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.910 [2024-11-27 10:00:23.955374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:10.910 [2024-11-27 10:00:23.955384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.910 [2024-11-27 10:00:23.955389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:10.910 [2024-11-27 10:00:23.955400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:116832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.910 [2024-11-27 10:00:23.955405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:10.910 [2024-11-27 10:00:23.955415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:116864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.910 [2024-11-27 10:00:23.955420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:10.910 [2024-11-27 10:00:23.955430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:116896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.910 [2024-11-27 10:00:23.955435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:10.910 [2024-11-27 10:00:23.955446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:116928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.910 [2024-11-27 10:00:23.955451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:10.910 [2024-11-27 10:00:23.955461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:116960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.910 [2024-11-27 10:00:23.955466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:10.910 [2024-11-27 10:00:23.955477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:116992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.910 [2024-11-27 10:00:23.955482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:10.910 [2024-11-27 10:00:23.955492] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:117024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.910 [2024-11-27 10:00:23.955497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:10.910 [2024-11-27 10:00:23.955507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:116968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.910 [2024-11-27 10:00:23.955512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:10.910 [2024-11-27 10:00:23.955522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:117000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.910 [2024-11-27 10:00:23.955527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:10.910 [2024-11-27 10:00:23.955537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:117032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.910 [2024-11-27 10:00:23.955544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:10.910 [2024-11-27 10:00:23.955555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:117056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.910 [2024-11-27 10:00:23.955560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:10.910 [2024-11-27 10:00:23.955570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:117088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.910 [2024-11-27 10:00:23.955575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:10.910 [2024-11-27 10:00:23.955586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:117120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.910 [2024-11-27 10:00:23.955591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:10.910 [2024-11-27 10:00:23.955601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:117152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.910 [2024-11-27 10:00:23.955606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:10.910 [2024-11-27 10:00:23.955617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:117048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.910 [2024-11-27 10:00:23.955623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:10.910 [2024-11-27 10:00:23.955633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:117080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.910 [2024-11-27 10:00:23.955639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0053 p:0 
m:0 dnr:0
00:28:10.910 [2024-11-27 10:00:23.955649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.910 [2024-11-27 10:00:23.955654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:28:10.910 [2024-11-27 10:00:23.955665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:117144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.910 [2024-11-27 10:00:23.955670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:28:10.910 [2024-11-27 10:00:23.956546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:117176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:10.910 [2024-11-27 10:00:23.956558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:28:10.910 [2024-11-27 10:00:23.956570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:117192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:10.910 [2024-11-27 10:00:23.956575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:28:10.910 [2024-11-27 10:00:23.956585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:117208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:10.910 [2024-11-27 10:00:23.956590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:28:10.910 [2024-11-27 10:00:23.956600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:117224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:10.910 [2024-11-27 10:00:23.956608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:28:10.910 [2024-11-27 10:00:23.956619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:117240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:10.910 [2024-11-27 10:00:23.956624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:28:10.910 [2024-11-27 10:00:23.956634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:117256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:10.911 [2024-11-27 10:00:23.956639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:28:10.911 [2024-11-27 10:00:23.956649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:117272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:10.911 [2024-11-27 10:00:23.956654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:28:10.911 [2024-11-27 10:00:23.956664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:117288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:10.911 [2024-11-27 10:00:23.956670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005d p:0 m:0 dnr:0
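Each pair of NOTICE lines in the dump above is one bdevperf I/O: the 243:nvme_io_qpair_print_command line shows the submitted READ or WRITE, and the matching 474:spdk_nvme_print_completion line shows it coming back with ANA status ASYMMETRIC ACCESS INACCESSIBLE (03/02) while the test flips the optimized path, a status the multipath host is expected to absorb by retrying on the other path. A quick sanity check one could run against the try.txt being catted here (before the rm -f later in the teardown) is to compare submissions against ANA-failed completions; a minimal sketch, assuming the file is still in place:

    # Count submitted commands vs. completions failed with ANA INACCESSIBLE.
    grep -c 'nvme_io_qpair_print_command' try.txt
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' try.txt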
00:28:10.911 [2024-11-27 10:00:23.956680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:117304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:10.911 [2024-11-27 10:00:23.956685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:28:10.911 [2024-11-27 10:00:23.956868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:117320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:10.911 [2024-11-27 10:00:23.956877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:28:10.911 [2024-11-27 10:00:23.956888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:117336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:10.911 [2024-11-27 10:00:23.956894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:28:10.911 [2024-11-27 10:00:23.956904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:117352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:10.911 [2024-11-27 10:00:23.956909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:10.911 [2024-11-27 10:00:23.956919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:117368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:10.911 [2024-11-27 10:00:23.956924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:10.911 11880.32 IOPS, 46.41 MiB/s [2024-11-27T09:00:26.377Z]
11910.77 IOPS, 46.53 MiB/s [2024-11-27T09:00:26.377Z]
Received shutdown signal, test time was about 26.939138 seconds
00:28:10.911
00:28:10.911 Latency(us)
00:28:10.911 [2024-11-27T09:00:26.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:10.911 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:28:10.911 Verification LBA range: start 0x0 length 0x4000
00:28:10.911 Nvme0n1 : 26.94 11936.73 46.63 0.00 0.00 10704.04 307.20 3019898.88
00:28:10.911 [2024-11-27T09:00:26.377Z] ===================================================================================================================
00:28:10.911 [2024-11-27T09:00:26.377Z] Total : 11936.73 46.63 0.00 0.00 10704.04 307.20 3019898.88
00:28:10.911 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:11.172 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:28:11.172 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:28:11.172 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:28:11.172 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:11.172 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
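The JSON blob that opened this stretch and the Latency(us) table above are two renderings of the same bdevperf counters: 11936.73 IOPS, 46.63 MiB/s, and a 10704.04 us average latency over the 26.94 s run. A minimal sketch of pulling those headline numbers back out of the JSON with jq; it assumes the result object was saved to bdevperf_result.json and that the per-job records sit under a top-level "jobs" array, a key name that is not actually visible in the excerpt:

    # Hypothetical file name and array key; only iops/mibps/avg_latency_us and
    # core_count are confirmed by the log above.
    jq -r '.core_count,
           (.jobs[]? | "\(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us")' \
       bdevperf_result.json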
00:28:11.173 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:11.173 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:28:11.173 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:11.173 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:11.173 rmmod nvme_tcp
00:28:11.173 rmmod nvme_fabrics
00:28:11.173 rmmod nvme_keyring
00:28:11.173 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:11.173 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:28:11.173 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:28:11.173 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 4010627 ']'
00:28:11.173 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 4010627
00:28:11.173 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 4010627 ']'
00:28:11.173 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 4010627
00:28:11.173 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:28:11.173 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:11.173 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4010627
00:28:11.434 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:11.434 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:11.434 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4010627'
00:28:11.434 killing process with pid 4010627
00:28:11.434 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 4010627
00:28:11.435 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 4010627
00:28:11.435 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:11.435 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:28:11.435 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:28:11.435 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:28:11.435 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:28:11.435 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:28:11.435 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:28:11.435 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:11.435 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:28:11.435 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
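The killprocess trace above maps almost line for line onto a small shell helper. A hedged reconstruction, inferred only from this xtrace (the real function lives in common/autotest_common.sh and may differ in detail):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1        # the @954 '[' -z ... ']' guard
        kill -0 "$pid" || return 0       # @958: nothing to do if it already exited
        if [ "$(uname)" = Linux ]; then  # @959
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")  # @960
            [ "$process_name" != sudo ] || return 1          # @964: never kill a bare sudo
        fi
        echo "killing process with pid $pid"                 # @972 prints the line seen above
        kill "$pid"                                          # @973
        wait "$pid"                                          # @978: reap the child
    }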
xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.435 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:11.435 10:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.978 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:13.978 00:28:13.978 real 0m41.441s 00:28:13.978 user 1m47.331s 00:28:13.978 sys 0m11.576s 00:28:13.978 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:13.978 10:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:13.978 ************************************ 00:28:13.978 END TEST nvmf_host_multipath_status 00:28:13.978 ************************************ 00:28:13.978 10:00:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:13.978 10:00:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:13.978 10:00:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:13.978 10:00:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.978 ************************************ 00:28:13.978 START TEST nvmf_discovery_remove_ifc 00:28:13.978 ************************************ 00:28:13.978 10:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:13.978 * Looking for test storage... 
00:28:13.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:13.978 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:13.978 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:28:13.978 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:13.978 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:13.978 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:13.978 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:13.978 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:13.978 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:28:13.978 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:28:13.978 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:28:13.978 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:28:13.978 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:28:13.978 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:28:13.978 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:28:13.978 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:13.978 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:28:13.978 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:28:13.978 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:13.978 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:13.978 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:28:13.978 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:28:13.978 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:13.978 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:28:13.978 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:28:13.978 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:28:13.978 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:28:13.978 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:13.978 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:28:13.978 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:28:13.978 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:13.978 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:13.978 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:28:13.978 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:13.978 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:13.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.978 --rc genhtml_branch_coverage=1 00:28:13.978 --rc genhtml_function_coverage=1 00:28:13.978 --rc genhtml_legend=1 00:28:13.979 --rc geninfo_all_blocks=1 00:28:13.979 --rc geninfo_unexecuted_blocks=1 00:28:13.979 00:28:13.979 ' 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:13.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.979 --rc genhtml_branch_coverage=1 00:28:13.979 --rc genhtml_function_coverage=1 00:28:13.979 --rc genhtml_legend=1 00:28:13.979 --rc geninfo_all_blocks=1 00:28:13.979 --rc geninfo_unexecuted_blocks=1 00:28:13.979 00:28:13.979 ' 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:13.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.979 --rc genhtml_branch_coverage=1 00:28:13.979 --rc genhtml_function_coverage=1 00:28:13.979 --rc genhtml_legend=1 00:28:13.979 --rc geninfo_all_blocks=1 00:28:13.979 --rc geninfo_unexecuted_blocks=1 00:28:13.979 00:28:13.979 ' 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:13.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.979 --rc genhtml_branch_coverage=1 00:28:13.979 --rc genhtml_function_coverage=1 00:28:13.979 --rc genhtml_legend=1 00:28:13.979 --rc geninfo_all_blocks=1 00:28:13.979 --rc geninfo_unexecuted_blocks=1 00:28:13.979 00:28:13.979 ' 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:13.979 
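[Editorial note] The version gate traced above (lt 1.15 2, which calls cmp_versions 1.15 '<' 2) is the harness deciding whether the installed lcov predates 2.x. The following is a loose reconstruction from the xtrace, not the actual scripts/common.sh source: it skips the decimal sanitization the real helper runs and only handles the '<', '>' and '==' operators seen here.

cmp_versions() {                        # usage: cmp_versions 1.15 '<' 2
    local IFS='.-:' op=$2               # split versions on dots, dashes, colons
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v
    for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
        # Missing components count as 0, so 1.15 compares as (1,15) vs (2,0).
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $op == '>' ]]; return; fi
        if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $op == '<' ]]; return; fi
    done
    [[ $op == '==' ]]                   # every component equal
}

cmp_versions 1.15 '<' 2 && echo 'lcov older than 2.x'   # fires: 1 < 2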
10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:13.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:13.979 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.980 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:13.980 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:13.980 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:28:13.980 10:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:28:22.129 10:00:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:22.129 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:22.129 10:00:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:22.129 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:22.129 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:22.129 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:22.129 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:22.130 
10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:22.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:22.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:28:22.130 00:28:22.130 --- 10.0.0.2 ping statistics --- 00:28:22.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.130 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:22.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:22.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:28:22.130 00:28:22.130 --- 10.0.0.1 ping statistics --- 00:28:22.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.130 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=4021464 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 4021464 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 4021464 ']' 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:22.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:22.130 10:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:22.130 [2024-11-27 10:00:36.752787] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:28:22.130 [2024-11-27 10:00:36.752855] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:22.130 [2024-11-27 10:00:36.852603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.130 [2024-11-27 10:00:36.903259] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:22.130 [2024-11-27 10:00:36.903310] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:22.130 [2024-11-27 10:00:36.903319] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:22.130 [2024-11-27 10:00:36.903326] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:22.130 [2024-11-27 10:00:36.903332] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:22.130 [2024-11-27 10:00:36.904104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.130 10:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:22.130 10:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:28:22.130 10:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:22.130 10:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:22.130 10:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:22.391 10:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:22.391 10:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:28:22.391 10:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.391 10:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:22.391 [2024-11-27 10:00:37.618038] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:22.391 [2024-11-27 10:00:37.626285] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:22.391 null0 00:28:22.391 [2024-11-27 10:00:37.658276] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:22.391 10:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.391 10:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=4021804 00:28:22.391 10:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:28:22.391 10:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 4021804 /tmp/host.sock 00:28:22.391 10:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 4021804 ']' 00:28:22.392 10:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:28:22.392 10:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:22.392 10:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:22.392 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:22.392 10:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:22.392 10:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:22.392 [2024-11-27 10:00:37.734467] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:28:22.392 [2024-11-27 10:00:37.734531] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4021804 ] 00:28:22.392 [2024-11-27 10:00:37.827662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.652 [2024-11-27 10:00:37.880785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.224 10:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:23.224 10:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:28:23.224 10:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:23.224 10:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:28:23.224 10:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.224 10:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:23.224 10:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.224 10:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:28:23.224 10:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.224 10:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:23.224 10:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.224 10:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:28:23.224 10:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.224 10:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:24.616 [2024-11-27 10:00:39.727108] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:24.616 [2024-11-27 10:00:39.727129] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:24.616 [2024-11-27 10:00:39.727142] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:24.616 [2024-11-27 10:00:39.854582] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:24.616 [2024-11-27 10:00:40.077002] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:28:24.616 [2024-11-27 10:00:40.077989] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x10c23f0:1 started. 00:28:24.616 [2024-11-27 10:00:40.079574] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:24.616 [2024-11-27 10:00:40.079616] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:24.616 [2024-11-27 10:00:40.079639] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:24.616 [2024-11-27 10:00:40.079653] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:24.616 [2024-11-27 10:00:40.079674] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:24.616 10:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.616 10:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:28:24.876 10:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:24.876 10:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:24.876 [2024-11-27 10:00:40.086666] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x10c23f0 was disconnected and freed. delete nvme_qpair. 
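[Editorial note] The RPC that kicked off this attach sequence (traced at host/discovery_remove_ifc.sh@69 above), written out as a direct scripts/rpc.py call; rpc_cmd in the trace is just a proxy for it against the host app's socket. The deliberately tight timeouts, roughly glossed below, are what make the interface-removal phase further down give up on the dead controller within seconds.

#   --ctrlr-loss-timeout-sec 2    stop reconnecting to a lost ctrlr after ~2s
#   --reconnect-delay-sec 1       retry the TCP connection once per second
#   --fast-io-fail-timeout-sec 1  fail queued I/O after 1s without a link
#   --wait-for-attach             return only once the discovered bdev exists
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach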
00:28:24.876 10:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:24.876 10:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.876 10:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:24.876 10:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:24.876 10:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:24.876 10:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.876 10:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:28:24.876 10:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:28:24.876 10:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:28:24.876 10:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:28:24.876 10:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:24.876 10:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:24.876 10:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:24.876 10:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.876 10:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:24.876 10:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:24.876 10:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:24.876 10:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.876 10:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:24.876 10:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:26.260 10:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:26.260 10:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:26.260 10:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:26.260 10:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.260 10:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:26.260 10:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:26.260 10:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:26.260 10:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.260 10:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:26.260 10:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:27.201 10:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:27.201 10:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:27.201 10:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:27.201 10:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.201 10:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:27.201 10:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:27.201 10:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:27.201 10:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.201 10:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:27.201 10:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:28.145 10:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:28.145 10:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:28.145 10:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:28.145 10:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.145 10:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:28.145 10:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:28.145 10:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:28.145 10:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.145 10:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:28.145 10:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:29.086 10:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:29.086 10:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:29.086 10:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:29.086 10:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.086 10:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:29.086 10:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:29.086 10:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:29.086 10:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:28:29.086 10:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:29.086 10:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:30.469 [2024-11-27 10:00:45.520184] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:28:30.469 [2024-11-27 10:00:45.520218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.469 [2024-11-27 10:00:45.520227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.469 [2024-11-27 10:00:45.520234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.469 [2024-11-27 10:00:45.520240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.469 [2024-11-27 10:00:45.520250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.469 [2024-11-27 10:00:45.520255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.469 [2024-11-27 10:00:45.520261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.469 [2024-11-27 10:00:45.520266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.469 [2024-11-27 10:00:45.520272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.469 [2024-11-27 10:00:45.520277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.469 [2024-11-27 10:00:45.520282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109ec00 is same with the state(6) to be set 00:28:30.469 [2024-11-27 10:00:45.530206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x109ec00 (9): Bad file descriptor 00:28:30.469 10:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:30.469 10:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:30.469 10:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:30.469 10:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.469 10:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:30.469 10:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:30.469 10:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:30.469 [2024-11-27 10:00:45.540238] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:28:30.469 [2024-11-27 10:00:45.540248] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:30.469 [2024-11-27 10:00:45.540251] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:30.469 [2024-11-27 10:00:45.540255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:30.469 [2024-11-27 10:00:45.540269] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:31.410 [2024-11-27 10:00:46.555242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:31.410 [2024-11-27 10:00:46.555333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x109ec00 with addr=10.0.0.2, port=4420 00:28:31.410 [2024-11-27 10:00:46.555364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109ec00 is same with the state(6) to be set 00:28:31.410 [2024-11-27 10:00:46.555420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x109ec00 (9): Bad file descriptor 00:28:31.410 [2024-11-27 10:00:46.556551] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:28:31.410 [2024-11-27 10:00:46.556622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:31.410 [2024-11-27 10:00:46.556645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:31.410 [2024-11-27 10:00:46.556668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:31.410 [2024-11-27 10:00:46.556688] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:31.410 [2024-11-27 10:00:46.556704] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:31.410 [2024-11-27 10:00:46.556729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:31.410 [2024-11-27 10:00:46.556753] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:31.410 [2024-11-27 10:00:46.556768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:31.410 10:00:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.410 10:00:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:31.410 10:00:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:32.353 [2024-11-27 10:00:47.559189] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:32.353 [2024-11-27 10:00:47.559206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
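[Editorial note] For orientation, the errno-110 storm and retry loop above trace back to the fault injected at host/discovery_remove_ifc.sh@75-76 earlier in this log: the target's address and link were pulled out from under the connected controller.

ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
# From here the host retries once per second (reconnect-delay-sec 1) until the
# ctrlr-loss timeout expires, then fails the reset and frees the qpair.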
00:28:32.353 [2024-11-27 10:00:47.559214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:32.353 [2024-11-27 10:00:47.559220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:32.353 [2024-11-27 10:00:47.559225] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:28:32.353 [2024-11-27 10:00:47.559230] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:32.353 [2024-11-27 10:00:47.559234] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:32.353 [2024-11-27 10:00:47.559237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:32.353 [2024-11-27 10:00:47.559254] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:28:32.353 [2024-11-27 10:00:47.559270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:32.353 [2024-11-27 10:00:47.559278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:32.353 [2024-11-27 10:00:47.559285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:32.353 [2024-11-27 10:00:47.559291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:32.353 [2024-11-27 10:00:47.559296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:32.353 [2024-11-27 10:00:47.559301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:32.353 [2024-11-27 10:00:47.559307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:32.353 [2024-11-27 10:00:47.559312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:32.353 [2024-11-27 10:00:47.559319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:32.353 [2024-11-27 10:00:47.559324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:32.353 [2024-11-27 10:00:47.559329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
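[Editorial note] The get_bdev_list / wait_for_bdev pair that paces every phase of this test is, reconstructed from the xtrace (the helper names are real, at discovery_remove_ifc.sh@29 and @33; the bodies below are inferred from the trace, not copied from the script):

get_bdev_list() {
    # Ask the host app for its bdevs; normalize to one sorted, space-joined line.
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # Poll once per second until the list matches the expectation:
    # "nvme0n1" while attached, "" after removal, "nvme1n1" after re-attach.
    while [[ $(get_bdev_list) != "$1" ]]; do
        sleep 1
    done
}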
00:28:32.353 [2024-11-27 10:00:47.559700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108e340 (9): Bad file descriptor 00:28:32.353 [2024-11-27 10:00:47.560710] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:28:32.353 [2024-11-27 10:00:47.560723] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:28:32.353 10:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:32.353 10:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:32.353 10:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:32.353 10:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.353 10:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:32.353 10:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:32.353 10:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:32.353 10:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.353 10:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:28:32.353 10:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:32.353 10:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:32.353 10:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:28:32.353 10:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:32.353 10:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:32.353 10:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:32.353 10:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.353 10:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:32.353 10:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:32.353 10:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:32.353 10:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.353 10:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:32.353 10:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:33.740 10:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:33.740 10:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:33.740 10:00:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:33.740 10:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.740 10:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:33.740 10:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:33.740 10:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:33.740 10:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.740 10:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:33.740 10:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:34.310 [2024-11-27 10:00:49.613132] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:34.310 [2024-11-27 10:00:49.613145] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:34.310 [2024-11-27 10:00:49.613155] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:34.310 [2024-11-27 10:00:49.742542] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:28:34.572 10:00:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:34.572 10:00:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:34.572 10:00:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:34.572 10:00:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.572 10:00:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:34.572 10:00:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:34.572 10:00:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:34.572 10:00:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.572 10:00:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:34.572 10:00:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:34.572 [2024-11-27 10:00:49.925563] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:28:34.572 [2024-11-27 10:00:49.926074] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1093120:1 started. 
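With 10.0.0.2 restored on the interface, the discovery poller the test started against 10.0.0.2:8009 reconnects on its own, re-reads the discovery log page, and attaches the rediscovered subsystem as nvme1. Starting such a poller looks roughly like the sketch below; the exact flag spelling is an assumption about current SPDK rpc.py, and the hostnqn value is a placeholder:

    # attach-on-discovery: new log-page entries are created as bdevs automatically
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2014-08.org.nvmexpress:uuid:host-placeholder \
        -w   # block until the initial attach completes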
00:28:34.572 [2024-11-27 10:00:49.926951] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:34.572 [2024-11-27 10:00:49.926978] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:34.572 [2024-11-27 10:00:49.926992] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:34.572 [2024-11-27 10:00:49.927003] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:28:34.572 [2024-11-27 10:00:49.927008] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:34.572 [2024-11-27 10:00:49.930350] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1093120 was disconnected and freed. delete nvme_qpair. 00:28:35.515 10:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:35.515 10:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:35.515 10:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:35.515 10:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.515 10:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:35.515 10:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:35.515 10:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:35.515 10:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.515 10:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:28:35.515 10:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:28:35.515 10:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 4021804 00:28:35.515 10:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 4021804 ']' 00:28:35.515 10:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 4021804 00:28:35.515 10:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:28:35.516 10:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:35.516 10:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4021804 00:28:35.776 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:35.776 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:35.776 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4021804' 00:28:35.776 killing process with pid 4021804 00:28:35.776 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 4021804 00:28:35.776 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 4021804 00:28:35.776 10:00:51 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:28:35.776 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:35.776 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:28:35.776 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:35.776 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:28:35.776 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:35.776 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:35.776 rmmod nvme_tcp 00:28:35.776 rmmod nvme_fabrics 00:28:35.776 rmmod nvme_keyring 00:28:35.776 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:35.776 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:28:35.776 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:28:35.776 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 4021464 ']' 00:28:35.776 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 4021464 00:28:35.776 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 4021464 ']' 00:28:35.776 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 4021464 00:28:35.776 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:28:35.776 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:35.777 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4021464 00:28:36.038 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:36.038 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:36.038 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4021464' 00:28:36.038 killing process with pid 4021464 00:28:36.038 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 4021464 00:28:36.038 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 4021464 00:28:36.038 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:36.038 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:36.038 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:36.038 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:28:36.038 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:28:36.038 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:36.038 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:28:36.038 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:36.038 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns
00:28:36.038 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:36.038 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:36.038 10:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:38.116 10:00:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:28:38.116
00:28:38.116 real 0m24.529s
00:28:38.116 user 0m29.670s
00:28:38.116 sys 0m7.213s
00:28:38.116 10:00:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:38.116 10:00:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:28:38.116 ************************************
00:28:38.116 END TEST nvmf_discovery_remove_ifc
00:28:38.116 ************************************
00:28:38.116 10:00:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp
00:28:38.116 10:00:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:28:38.116 10:00:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:38.116 10:00:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:28:38.116 ************************************
00:28:38.116 START TEST nvmf_identify_kernel_target
00:28:38.116 ************************************
00:28:38.116 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp
00:28:38.378 * Looking for test storage...
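The nvmftestfini sequence that closed out the previous test is the mirror image of its setup; stripped of the xtrace noise it comes down to the condensed sketch below (interface and namespace names taken from this run; the trace leaves _remove_spdk_ns unexpanded, so the netns delete is an inference):

    modprobe -v -r nvme-tcp                              # also drops nvme_fabrics/nvme_keyring
    iptables-save | grep -v SPDK_NVMF | iptables-restore # strip only SPDK's own rules
    ip netns delete cvl_0_0_ns_spdk                      # remove the target-side namespace
    ip -4 addr flush cvl_0_1                             # clear the initiator-side interface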
00:28:38.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:38.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:38.378 --rc genhtml_branch_coverage=1 00:28:38.378 --rc genhtml_function_coverage=1 00:28:38.378 --rc genhtml_legend=1 00:28:38.378 --rc geninfo_all_blocks=1 00:28:38.378 --rc geninfo_unexecuted_blocks=1 00:28:38.378 00:28:38.378 ' 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:38.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:38.378 --rc genhtml_branch_coverage=1 00:28:38.378 --rc genhtml_function_coverage=1 00:28:38.378 --rc genhtml_legend=1 00:28:38.378 --rc geninfo_all_blocks=1 00:28:38.378 --rc geninfo_unexecuted_blocks=1 00:28:38.378 00:28:38.378 ' 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:38.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:38.378 --rc genhtml_branch_coverage=1 00:28:38.378 --rc genhtml_function_coverage=1 00:28:38.378 --rc genhtml_legend=1 00:28:38.378 --rc geninfo_all_blocks=1 00:28:38.378 --rc geninfo_unexecuted_blocks=1 00:28:38.378 00:28:38.378 ' 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:38.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:38.378 --rc genhtml_branch_coverage=1 00:28:38.378 --rc genhtml_function_coverage=1 00:28:38.378 --rc genhtml_legend=1 00:28:38.378 --rc geninfo_all_blocks=1 00:28:38.378 --rc geninfo_unexecuted_blocks=1 00:28:38.378 00:28:38.378 ' 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.378 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:28:38.379 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.379 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:28:38.379 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:38.379 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:38.379 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:38.379 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:38.379 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:38.379 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:28:38.379 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:38.379 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:38.379 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:38.379 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:38.379 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:28:38.379 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:38.379 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:38.379 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:38.379 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:38.379 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:38.379 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:38.379 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:38.379 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.379 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:38.379 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:38.379 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:28:38.379 10:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:46.520 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:46.520 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:28:46.520 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:46.520 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:46.520 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:46.520 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:46.520 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:46.520 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:28:46.520 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:46.520 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:28:46.520 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:28:46.520 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:28:46.520 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:28:46.520 10:01:00 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:28:46.520 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:28:46.520 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:46.520 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:46.520 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:46.520 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:46.520 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:46.520 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:46.520 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:46.520 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:46.521 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:46.521 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:46.521 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:46.521 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:46.521 10:01:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:46.521 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:46.521 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:46.521 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:46.521 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:46.521 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:46.521 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:46.521 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:46.521 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:46.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:46.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:28:46.521 00:28:46.521 --- 10.0.0.2 ping statistics --- 00:28:46.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.521 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:28:46.521 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:46.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:46.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:28:46.521 00:28:46.522 --- 10.0.0.1 ping statistics --- 00:28:46.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.522 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:28:46.522 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:46.522 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:28:46.522 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:46.522 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:46.522 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:46.522 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:46.522 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:46.522 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:46.522 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:46.522 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:28:46.522 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:28:46.522 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:28:46.522 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:46.522 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:46.522 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.522 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.522 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:46.522 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.522 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:46.522 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:46.522 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:46.522 10:01:01 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:28:46.522 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:46.522 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:46.522 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:46.522 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:46.522 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:46.522 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:46.522 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:28:46.522 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:28:46.522 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:46.522 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:46.522 10:01:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:49.823 Waiting for block devices as requested 00:28:49.823 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:49.823 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:49.823 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:49.823 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:49.823 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:49.823 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:49.823 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:50.083 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:50.083 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:50.343 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:50.343 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:50.343 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:50.604 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:50.604 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:50.604 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:50.866 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:50.866 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:51.127 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:51.127 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:51.127 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:51.127 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:51.127 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:51.127 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
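configure_kernel_target builds the whole kernel NVMe/TCP target out of configfs writes; the mkdir/echo/ln sequence traced below amounts to this sketch. The NQN, device, and addresses are the ones this run uses, but xtrace hides redirections, so the redirect targets (attr_model, attr_allow_any_host, device_path, enable, addr_*) are inferred from the standard kernel nvmet layout and should be read as assumptions:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"        # namespace goes live here
    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp  > "$nvmet/ports/1/addr_trtype"
    echo 4420 > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4 > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"  # expose the subsystem on the port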
00:28:51.127 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:51.127 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:51.127 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:51.127 No valid GPT data, bailing 00:28:51.127 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:51.127 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:28:51.127 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:28:51.127 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:51.127 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:28:51.127 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:51.127 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:51.387 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:51.387 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:51.387 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:28:51.387 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:28:51.387 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:28:51.387 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:51.387 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:28:51.387 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:28:51.387 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:28:51.387 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:51.387 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:28:51.387 00:28:51.387 Discovery Log Number of Records 2, Generation counter 2 00:28:51.387 =====Discovery Log Entry 0====== 00:28:51.387 trtype: tcp 00:28:51.387 adrfam: ipv4 00:28:51.387 subtype: current discovery subsystem 00:28:51.387 treq: not specified, sq flow control disable supported 00:28:51.387 portid: 1 00:28:51.387 trsvcid: 4420 00:28:51.387 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:51.387 traddr: 10.0.0.1 00:28:51.387 eflags: none 00:28:51.387 sectype: none 00:28:51.387 =====Discovery Log Entry 1====== 00:28:51.387 trtype: tcp 00:28:51.387 adrfam: ipv4 00:28:51.387 subtype: nvme subsystem 00:28:51.387 treq: not specified, sq flow control disable 
supported 00:28:51.387 portid: 1 00:28:51.387 trsvcid: 4420 00:28:51.387 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:51.387 traddr: 10.0.0.1 00:28:51.387 eflags: none 00:28:51.387 sectype: none 00:28:51.387 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:28:51.387 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:28:51.387 ===================================================== 00:28:51.387 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:51.387 ===================================================== 00:28:51.387 Controller Capabilities/Features 00:28:51.387 ================================ 00:28:51.387 Vendor ID: 0000 00:28:51.387 Subsystem Vendor ID: 0000 00:28:51.387 Serial Number: 668589017e78e0dceb91 00:28:51.387 Model Number: Linux 00:28:51.387 Firmware Version: 6.8.9-20 00:28:51.387 Recommended Arb Burst: 0 00:28:51.387 IEEE OUI Identifier: 00 00 00 00:28:51.387 Multi-path I/O 00:28:51.387 May have multiple subsystem ports: No 00:28:51.387 May have multiple controllers: No 00:28:51.387 Associated with SR-IOV VF: No 00:28:51.387 Max Data Transfer Size: Unlimited 00:28:51.387 Max Number of Namespaces: 0 00:28:51.387 Max Number of I/O Queues: 1024 00:28:51.387 NVMe Specification Version (VS): 1.3 00:28:51.387 NVMe Specification Version (Identify): 1.3 00:28:51.387 Maximum Queue Entries: 1024 00:28:51.387 Contiguous Queues Required: No 00:28:51.387 Arbitration Mechanisms Supported 00:28:51.387 Weighted Round Robin: Not Supported 00:28:51.387 Vendor Specific: Not Supported 00:28:51.387 Reset Timeout: 7500 ms 00:28:51.387 Doorbell Stride: 4 bytes 00:28:51.387 NVM Subsystem Reset: Not Supported 00:28:51.387 Command Sets Supported 00:28:51.387 NVM Command Set: Supported 00:28:51.387 Boot Partition: Not Supported 00:28:51.387 Memory Page Size Minimum: 4096 bytes 00:28:51.387 Memory Page Size Maximum: 4096 bytes 00:28:51.387 Persistent Memory Region: Not Supported 00:28:51.387 Optional Asynchronous Events Supported 00:28:51.387 Namespace Attribute Notices: Not Supported 00:28:51.387 Firmware Activation Notices: Not Supported 00:28:51.387 ANA Change Notices: Not Supported 00:28:51.387 PLE Aggregate Log Change Notices: Not Supported 00:28:51.387 LBA Status Info Alert Notices: Not Supported 00:28:51.387 EGE Aggregate Log Change Notices: Not Supported 00:28:51.387 Normal NVM Subsystem Shutdown event: Not Supported 00:28:51.387 Zone Descriptor Change Notices: Not Supported 00:28:51.387 Discovery Log Change Notices: Supported 00:28:51.387 Controller Attributes 00:28:51.387 128-bit Host Identifier: Not Supported 00:28:51.387 Non-Operational Permissive Mode: Not Supported 00:28:51.387 NVM Sets: Not Supported 00:28:51.387 Read Recovery Levels: Not Supported 00:28:51.387 Endurance Groups: Not Supported 00:28:51.387 Predictable Latency Mode: Not Supported 00:28:51.387 Traffic Based Keep ALive: Not Supported 00:28:51.387 Namespace Granularity: Not Supported 00:28:51.387 SQ Associations: Not Supported 00:28:51.387 UUID List: Not Supported 00:28:51.387 Multi-Domain Subsystem: Not Supported 00:28:51.387 Fixed Capacity Management: Not Supported 00:28:51.387 Variable Capacity Management: Not Supported 00:28:51.387 Delete Endurance Group: Not Supported 00:28:51.387 Delete NVM Set: Not Supported 00:28:51.388 Extended LBA Formats Supported: Not Supported 00:28:51.388 Flexible Data Placement 
Supported: Not Supported 00:28:51.388 00:28:51.388 Controller Memory Buffer Support 00:28:51.388 ================================ 00:28:51.388 Supported: No 00:28:51.388 00:28:51.388 Persistent Memory Region Support 00:28:51.388 ================================ 00:28:51.388 Supported: No 00:28:51.388 00:28:51.388 Admin Command Set Attributes 00:28:51.388 ============================ 00:28:51.388 Security Send/Receive: Not Supported 00:28:51.388 Format NVM: Not Supported 00:28:51.388 Firmware Activate/Download: Not Supported 00:28:51.388 Namespace Management: Not Supported 00:28:51.388 Device Self-Test: Not Supported 00:28:51.388 Directives: Not Supported 00:28:51.388 NVMe-MI: Not Supported 00:28:51.388 Virtualization Management: Not Supported 00:28:51.388 Doorbell Buffer Config: Not Supported 00:28:51.388 Get LBA Status Capability: Not Supported 00:28:51.388 Command & Feature Lockdown Capability: Not Supported 00:28:51.388 Abort Command Limit: 1 00:28:51.388 Async Event Request Limit: 1 00:28:51.388 Number of Firmware Slots: N/A 00:28:51.388 Firmware Slot 1 Read-Only: N/A 00:28:51.388 Firmware Activation Without Reset: N/A 00:28:51.388 Multiple Update Detection Support: N/A 00:28:51.388 Firmware Update Granularity: No Information Provided 00:28:51.388 Per-Namespace SMART Log: No 00:28:51.388 Asymmetric Namespace Access Log Page: Not Supported 00:28:51.388 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:51.388 Command Effects Log Page: Not Supported 00:28:51.388 Get Log Page Extended Data: Supported 00:28:51.388 Telemetry Log Pages: Not Supported 00:28:51.388 Persistent Event Log Pages: Not Supported 00:28:51.388 Supported Log Pages Log Page: May Support 00:28:51.388 Commands Supported & Effects Log Page: Not Supported 00:28:51.388 Feature Identifiers & Effects Log Page:May Support 00:28:51.388 NVMe-MI Commands & Effects Log Page: May Support 00:28:51.388 Data Area 4 for Telemetry Log: Not Supported 00:28:51.388 Error Log Page Entries Supported: 1 00:28:51.388 Keep Alive: Not Supported 00:28:51.388 00:28:51.388 NVM Command Set Attributes 00:28:51.388 ========================== 00:28:51.388 Submission Queue Entry Size 00:28:51.388 Max: 1 00:28:51.388 Min: 1 00:28:51.388 Completion Queue Entry Size 00:28:51.388 Max: 1 00:28:51.388 Min: 1 00:28:51.388 Number of Namespaces: 0 00:28:51.388 Compare Command: Not Supported 00:28:51.388 Write Uncorrectable Command: Not Supported 00:28:51.388 Dataset Management Command: Not Supported 00:28:51.388 Write Zeroes Command: Not Supported 00:28:51.388 Set Features Save Field: Not Supported 00:28:51.388 Reservations: Not Supported 00:28:51.388 Timestamp: Not Supported 00:28:51.388 Copy: Not Supported 00:28:51.388 Volatile Write Cache: Not Present 00:28:51.388 Atomic Write Unit (Normal): 1 00:28:51.388 Atomic Write Unit (PFail): 1 00:28:51.388 Atomic Compare & Write Unit: 1 00:28:51.388 Fused Compare & Write: Not Supported 00:28:51.388 Scatter-Gather List 00:28:51.388 SGL Command Set: Supported 00:28:51.388 SGL Keyed: Not Supported 00:28:51.388 SGL Bit Bucket Descriptor: Not Supported 00:28:51.388 SGL Metadata Pointer: Not Supported 00:28:51.388 Oversized SGL: Not Supported 00:28:51.388 SGL Metadata Address: Not Supported 00:28:51.388 SGL Offset: Supported 00:28:51.388 Transport SGL Data Block: Not Supported 00:28:51.388 Replay Protected Memory Block: Not Supported 00:28:51.388 00:28:51.388 Firmware Slot Information 00:28:51.388 ========================= 00:28:51.388 Active slot: 0 00:28:51.388 00:28:51.388 00:28:51.388 Error Log 00:28:51.388 
========= 00:28:51.388 00:28:51.388 Active Namespaces 00:28:51.388 ================= 00:28:51.388 Discovery Log Page 00:28:51.388 ================== 00:28:51.388 Generation Counter: 2 00:28:51.388 Number of Records: 2 00:28:51.388 Record Format: 0 00:28:51.388 00:28:51.388 Discovery Log Entry 0 00:28:51.388 ---------------------- 00:28:51.388 Transport Type: 3 (TCP) 00:28:51.388 Address Family: 1 (IPv4) 00:28:51.388 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:51.388 Entry Flags: 00:28:51.388 Duplicate Returned Information: 0 00:28:51.388 Explicit Persistent Connection Support for Discovery: 0 00:28:51.388 Transport Requirements: 00:28:51.388 Secure Channel: Not Specified 00:28:51.388 Port ID: 1 (0x0001) 00:28:51.388 Controller ID: 65535 (0xffff) 00:28:51.388 Admin Max SQ Size: 32 00:28:51.388 Transport Service Identifier: 4420 00:28:51.388 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:51.388 Transport Address: 10.0.0.1 00:28:51.388 Discovery Log Entry 1 00:28:51.388 ---------------------- 00:28:51.388 Transport Type: 3 (TCP) 00:28:51.388 Address Family: 1 (IPv4) 00:28:51.388 Subsystem Type: 2 (NVM Subsystem) 00:28:51.388 Entry Flags: 00:28:51.388 Duplicate Returned Information: 0 00:28:51.388 Explicit Persistent Connection Support for Discovery: 0 00:28:51.388 Transport Requirements: 00:28:51.388 Secure Channel: Not Specified 00:28:51.388 Port ID: 1 (0x0001) 00:28:51.388 Controller ID: 65535 (0xffff) 00:28:51.388 Admin Max SQ Size: 32 00:28:51.388 Transport Service Identifier: 4420 00:28:51.388 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:28:51.388 Transport Address: 10.0.0.1 00:28:51.388 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:51.649 get_feature(0x01) failed 00:28:51.649 get_feature(0x02) failed 00:28:51.649 get_feature(0x04) failed 00:28:51.649 ===================================================== 00:28:51.649 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:51.649 ===================================================== 00:28:51.649 Controller Capabilities/Features 00:28:51.649 ================================ 00:28:51.649 Vendor ID: 0000 00:28:51.649 Subsystem Vendor ID: 0000 00:28:51.649 Serial Number: a620bced4f398bfcbd55 00:28:51.649 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:28:51.649 Firmware Version: 6.8.9-20 00:28:51.649 Recommended Arb Burst: 6 00:28:51.649 IEEE OUI Identifier: 00 00 00 00:28:51.649 Multi-path I/O 00:28:51.649 May have multiple subsystem ports: Yes 00:28:51.649 May have multiple controllers: Yes 00:28:51.649 Associated with SR-IOV VF: No 00:28:51.649 Max Data Transfer Size: Unlimited 00:28:51.649 Max Number of Namespaces: 1024 00:28:51.649 Max Number of I/O Queues: 128 00:28:51.649 NVMe Specification Version (VS): 1.3 00:28:51.649 NVMe Specification Version (Identify): 1.3 00:28:51.649 Maximum Queue Entries: 1024 00:28:51.649 Contiguous Queues Required: No 00:28:51.649 Arbitration Mechanisms Supported 00:28:51.649 Weighted Round Robin: Not Supported 00:28:51.649 Vendor Specific: Not Supported 00:28:51.649 Reset Timeout: 7500 ms 00:28:51.649 Doorbell Stride: 4 bytes 00:28:51.649 NVM Subsystem Reset: Not Supported 00:28:51.649 Command Sets Supported 00:28:51.649 NVM Command Set: Supported 00:28:51.649 Boot Partition: Not Supported 00:28:51.649 
Memory Page Size Minimum: 4096 bytes 00:28:51.650 Memory Page Size Maximum: 4096 bytes 00:28:51.650 Persistent Memory Region: Not Supported 00:28:51.650 Optional Asynchronous Events Supported 00:28:51.650 Namespace Attribute Notices: Supported 00:28:51.650 Firmware Activation Notices: Not Supported 00:28:51.650 ANA Change Notices: Supported 00:28:51.650 PLE Aggregate Log Change Notices: Not Supported 00:28:51.650 LBA Status Info Alert Notices: Not Supported 00:28:51.650 EGE Aggregate Log Change Notices: Not Supported 00:28:51.650 Normal NVM Subsystem Shutdown event: Not Supported 00:28:51.650 Zone Descriptor Change Notices: Not Supported 00:28:51.650 Discovery Log Change Notices: Not Supported 00:28:51.650 Controller Attributes 00:28:51.650 128-bit Host Identifier: Supported 00:28:51.650 Non-Operational Permissive Mode: Not Supported 00:28:51.650 NVM Sets: Not Supported 00:28:51.650 Read Recovery Levels: Not Supported 00:28:51.650 Endurance Groups: Not Supported 00:28:51.650 Predictable Latency Mode: Not Supported 00:28:51.650 Traffic Based Keep ALive: Supported 00:28:51.650 Namespace Granularity: Not Supported 00:28:51.650 SQ Associations: Not Supported 00:28:51.650 UUID List: Not Supported 00:28:51.650 Multi-Domain Subsystem: Not Supported 00:28:51.650 Fixed Capacity Management: Not Supported 00:28:51.650 Variable Capacity Management: Not Supported 00:28:51.650 Delete Endurance Group: Not Supported 00:28:51.650 Delete NVM Set: Not Supported 00:28:51.650 Extended LBA Formats Supported: Not Supported 00:28:51.650 Flexible Data Placement Supported: Not Supported 00:28:51.650 00:28:51.650 Controller Memory Buffer Support 00:28:51.650 ================================ 00:28:51.650 Supported: No 00:28:51.650 00:28:51.650 Persistent Memory Region Support 00:28:51.650 ================================ 00:28:51.650 Supported: No 00:28:51.650 00:28:51.650 Admin Command Set Attributes 00:28:51.650 ============================ 00:28:51.650 Security Send/Receive: Not Supported 00:28:51.650 Format NVM: Not Supported 00:28:51.650 Firmware Activate/Download: Not Supported 00:28:51.650 Namespace Management: Not Supported 00:28:51.650 Device Self-Test: Not Supported 00:28:51.650 Directives: Not Supported 00:28:51.650 NVMe-MI: Not Supported 00:28:51.650 Virtualization Management: Not Supported 00:28:51.650 Doorbell Buffer Config: Not Supported 00:28:51.650 Get LBA Status Capability: Not Supported 00:28:51.650 Command & Feature Lockdown Capability: Not Supported 00:28:51.650 Abort Command Limit: 4 00:28:51.650 Async Event Request Limit: 4 00:28:51.650 Number of Firmware Slots: N/A 00:28:51.650 Firmware Slot 1 Read-Only: N/A 00:28:51.650 Firmware Activation Without Reset: N/A 00:28:51.650 Multiple Update Detection Support: N/A 00:28:51.650 Firmware Update Granularity: No Information Provided 00:28:51.650 Per-Namespace SMART Log: Yes 00:28:51.650 Asymmetric Namespace Access Log Page: Supported 00:28:51.650 ANA Transition Time : 10 sec 00:28:51.650 00:28:51.650 Asymmetric Namespace Access Capabilities 00:28:51.650 ANA Optimized State : Supported 00:28:51.650 ANA Non-Optimized State : Supported 00:28:51.650 ANA Inaccessible State : Supported 00:28:51.650 ANA Persistent Loss State : Supported 00:28:51.650 ANA Change State : Supported 00:28:51.650 ANAGRPID is not changed : No 00:28:51.650 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:28:51.650 00:28:51.650 ANA Group Identifier Maximum : 128 00:28:51.650 Number of ANA Group Identifiers : 128 00:28:51.650 Max Number of Allowed Namespaces : 1024 00:28:51.650 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:28:51.650 Command Effects Log Page: Supported 00:28:51.650 Get Log Page Extended Data: Supported 00:28:51.650 Telemetry Log Pages: Not Supported 00:28:51.650 Persistent Event Log Pages: Not Supported 00:28:51.650 Supported Log Pages Log Page: May Support 00:28:51.650 Commands Supported & Effects Log Page: Not Supported 00:28:51.650 Feature Identifiers & Effects Log Page:May Support 00:28:51.650 NVMe-MI Commands & Effects Log Page: May Support 00:28:51.650 Data Area 4 for Telemetry Log: Not Supported 00:28:51.650 Error Log Page Entries Supported: 128 00:28:51.650 Keep Alive: Supported 00:28:51.650 Keep Alive Granularity: 1000 ms 00:28:51.650 00:28:51.650 NVM Command Set Attributes 00:28:51.650 ========================== 00:28:51.650 Submission Queue Entry Size 00:28:51.650 Max: 64 00:28:51.650 Min: 64 00:28:51.650 Completion Queue Entry Size 00:28:51.650 Max: 16 00:28:51.650 Min: 16 00:28:51.650 Number of Namespaces: 1024 00:28:51.650 Compare Command: Not Supported 00:28:51.650 Write Uncorrectable Command: Not Supported 00:28:51.650 Dataset Management Command: Supported 00:28:51.650 Write Zeroes Command: Supported 00:28:51.650 Set Features Save Field: Not Supported 00:28:51.650 Reservations: Not Supported 00:28:51.650 Timestamp: Not Supported 00:28:51.650 Copy: Not Supported 00:28:51.650 Volatile Write Cache: Present 00:28:51.650 Atomic Write Unit (Normal): 1 00:28:51.650 Atomic Write Unit (PFail): 1 00:28:51.650 Atomic Compare & Write Unit: 1 00:28:51.650 Fused Compare & Write: Not Supported 00:28:51.650 Scatter-Gather List 00:28:51.650 SGL Command Set: Supported 00:28:51.650 SGL Keyed: Not Supported 00:28:51.650 SGL Bit Bucket Descriptor: Not Supported 00:28:51.650 SGL Metadata Pointer: Not Supported 00:28:51.650 Oversized SGL: Not Supported 00:28:51.650 SGL Metadata Address: Not Supported 00:28:51.650 SGL Offset: Supported 00:28:51.650 Transport SGL Data Block: Not Supported 00:28:51.650 Replay Protected Memory Block: Not Supported 00:28:51.650 00:28:51.650 Firmware Slot Information 00:28:51.650 ========================= 00:28:51.650 Active slot: 0 00:28:51.650 00:28:51.650 Asymmetric Namespace Access 00:28:51.650 =========================== 00:28:51.650 Change Count : 0 00:28:51.650 Number of ANA Group Descriptors : 1 00:28:51.650 ANA Group Descriptor : 0 00:28:51.650 ANA Group ID : 1 00:28:51.650 Number of NSID Values : 1 00:28:51.650 Change Count : 0 00:28:51.650 ANA State : 1 00:28:51.650 Namespace Identifier : 1 00:28:51.650 00:28:51.650 Commands Supported and Effects 00:28:51.650 ============================== 00:28:51.650 Admin Commands 00:28:51.650 -------------- 00:28:51.650 Get Log Page (02h): Supported 00:28:51.650 Identify (06h): Supported 00:28:51.650 Abort (08h): Supported 00:28:51.650 Set Features (09h): Supported 00:28:51.650 Get Features (0Ah): Supported 00:28:51.650 Asynchronous Event Request (0Ch): Supported 00:28:51.650 Keep Alive (18h): Supported 00:28:51.650 I/O Commands 00:28:51.650 ------------ 00:28:51.650 Flush (00h): Supported 00:28:51.650 Write (01h): Supported LBA-Change 00:28:51.650 Read (02h): Supported 00:28:51.650 Write Zeroes (08h): Supported LBA-Change 00:28:51.650 Dataset Management (09h): Supported 00:28:51.650 00:28:51.650 Error Log 00:28:51.650 ========= 00:28:51.650 Entry: 0 00:28:51.650 Error Count: 0x3 00:28:51.650 Submission Queue Id: 0x0 00:28:51.650 Command Id: 0x5 00:28:51.650 Phase Bit: 0 00:28:51.650 Status Code: 0x2 00:28:51.650 Status Code Type: 0x0 00:28:51.650 Do Not Retry: 1 00:28:51.650 
Error Location: 0x28 00:28:51.650 LBA: 0x0 00:28:51.650 Namespace: 0x0 00:28:51.650 Vendor Log Page: 0x0 00:28:51.650 ----------- 00:28:51.650 Entry: 1 00:28:51.650 Error Count: 0x2 00:28:51.650 Submission Queue Id: 0x0 00:28:51.650 Command Id: 0x5 00:28:51.650 Phase Bit: 0 00:28:51.650 Status Code: 0x2 00:28:51.650 Status Code Type: 0x0 00:28:51.650 Do Not Retry: 1 00:28:51.650 Error Location: 0x28 00:28:51.650 LBA: 0x0 00:28:51.650 Namespace: 0x0 00:28:51.650 Vendor Log Page: 0x0 00:28:51.650 ----------- 00:28:51.650 Entry: 2 00:28:51.650 Error Count: 0x1 00:28:51.650 Submission Queue Id: 0x0 00:28:51.650 Command Id: 0x4 00:28:51.650 Phase Bit: 0 00:28:51.650 Status Code: 0x2 00:28:51.650 Status Code Type: 0x0 00:28:51.650 Do Not Retry: 1 00:28:51.650 Error Location: 0x28 00:28:51.650 LBA: 0x0 00:28:51.650 Namespace: 0x0 00:28:51.650 Vendor Log Page: 0x0 00:28:51.650 00:28:51.650 Number of Queues 00:28:51.650 ================ 00:28:51.650 Number of I/O Submission Queues: 128 00:28:51.650 Number of I/O Completion Queues: 128 00:28:51.650 00:28:51.650 ZNS Specific Controller Data 00:28:51.650 ============================ 00:28:51.650 Zone Append Size Limit: 0 00:28:51.650 00:28:51.650 00:28:51.650 Active Namespaces 00:28:51.650 ================= 00:28:51.650 get_feature(0x05) failed 00:28:51.650 Namespace ID:1 00:28:51.650 Command Set Identifier: NVM (00h) 00:28:51.650 Deallocate: Supported 00:28:51.650 Deallocated/Unwritten Error: Not Supported 00:28:51.650 Deallocated Read Value: Unknown 00:28:51.651 Deallocate in Write Zeroes: Not Supported 00:28:51.651 Deallocated Guard Field: 0xFFFF 00:28:51.651 Flush: Supported 00:28:51.651 Reservation: Not Supported 00:28:51.651 Namespace Sharing Capabilities: Multiple Controllers 00:28:51.651 Size (in LBAs): 3750748848 (1788GiB) 00:28:51.651 Capacity (in LBAs): 3750748848 (1788GiB) 00:28:51.651 Utilization (in LBAs): 3750748848 (1788GiB) 00:28:51.651 UUID: 683f9a9d-a896-4f84-9ae1-10b56d61c853 00:28:51.651 Thin Provisioning: Not Supported 00:28:51.651 Per-NS Atomic Units: Yes 00:28:51.651 Atomic Write Unit (Normal): 8 00:28:51.651 Atomic Write Unit (PFail): 8 00:28:51.651 Preferred Write Granularity: 8 00:28:51.651 Atomic Compare & Write Unit: 8 00:28:51.651 Atomic Boundary Size (Normal): 0 00:28:51.651 Atomic Boundary Size (PFail): 0 00:28:51.651 Atomic Boundary Offset: 0 00:28:51.651 NGUID/EUI64 Never Reused: No 00:28:51.651 ANA group ID: 1 00:28:51.651 Namespace Write Protected: No 00:28:51.651 Number of LBA Formats: 1 00:28:51.651 Current LBA Format: LBA Format #00 00:28:51.651 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:51.651 00:28:51.651 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:28:51.651 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:51.651 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:28:51.651 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:51.651 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:28:51.651 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:51.651 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:51.651 rmmod nvme_tcp 00:28:51.651 rmmod nvme_fabrics 00:28:51.651 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:51.651 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:28:51.651 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:28:51.651 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:28:51.651 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:51.651 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:51.651 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:51.651 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:28:51.651 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:28:51.651 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:51.651 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:28:51.651 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:51.651 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:51.651 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.651 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:51.651 10:01:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:54.200 10:01:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:54.200 10:01:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:28:54.200 10:01:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:54.200 10:01:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:28:54.200 10:01:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:54.201 10:01:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:54.201 10:01:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:54.201 10:01:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:54.201 10:01:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:54.201 10:01:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:54.201 10:01:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:57.504 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:57.504 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:57.504 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:28:57.504 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:57.504 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:57.504 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:57.504 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:57.504 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:57.504 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:57.504 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:57.504 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:57.504 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:57.504 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:57.504 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:57.504 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:57.504 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:57.504 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:57.766 00:28:57.766 real 0m19.679s 00:28:57.766 user 0m5.353s 00:28:57.766 sys 0m11.343s 00:28:57.766 10:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:57.766 10:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:57.766 ************************************ 00:28:57.766 END TEST nvmf_identify_kernel_target 00:28:57.766 ************************************ 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.028 ************************************ 00:28:58.028 START TEST nvmf_auth_host 00:28:58.028 ************************************ 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:58.028 * Looking for test storage... 
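For reference, the clean_kernel_target teardown traced above reduces to the configfs sequence below. Every path is taken verbatim from the trace except the target of the bare 'echo 0', which the xtrace does not show; the namespace enable attribute is an assumption:

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
echo 0 > "$subsys/namespaces/1/enable"   # assumed target of the traced 'echo 0': disable the namespace first
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn   # unlink port 1 from the subsystem
rmdir "$subsys/namespaces/1"
rmdir /sys/kernel/config/nvmet/ports/1
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet              # unload the kernel target once the configfs tree is empty

The unlink has to precede the rmdir calls: configfs refuses to remove a subsystem directory that a port still references.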
00:28:58.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:58.028 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:58.291 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:28:58.291 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:58.291 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:58.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.291 --rc genhtml_branch_coverage=1 00:28:58.291 --rc genhtml_function_coverage=1 00:28:58.291 --rc genhtml_legend=1 00:28:58.291 --rc geninfo_all_blocks=1 00:28:58.291 --rc geninfo_unexecuted_blocks=1 00:28:58.291 00:28:58.291 ' 00:28:58.291 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:58.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.291 --rc genhtml_branch_coverage=1 00:28:58.291 --rc genhtml_function_coverage=1 00:28:58.291 --rc genhtml_legend=1 00:28:58.291 --rc geninfo_all_blocks=1 00:28:58.291 --rc geninfo_unexecuted_blocks=1 00:28:58.291 00:28:58.291 ' 00:28:58.291 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:58.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.291 --rc genhtml_branch_coverage=1 00:28:58.291 --rc genhtml_function_coverage=1 00:28:58.291 --rc genhtml_legend=1 00:28:58.291 --rc geninfo_all_blocks=1 00:28:58.291 --rc geninfo_unexecuted_blocks=1 00:28:58.291 00:28:58.291 ' 00:28:58.291 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:58.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.291 --rc genhtml_branch_coverage=1 00:28:58.291 --rc genhtml_function_coverage=1 00:28:58.291 --rc genhtml_legend=1 00:28:58.291 --rc geninfo_all_blocks=1 00:28:58.291 --rc geninfo_unexecuted_blocks=1 00:28:58.291 00:28:58.291 ' 00:28:58.291 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:58.291 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:28:58.291 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:58.291 10:01:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:58.291 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:58.291 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:58.291 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:58.291 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:58.291 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:58.291 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:58.291 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:58.291 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:58.291 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:58.291 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:58.291 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:58.291 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:58.291 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:58.291 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:58.291 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:58.291 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:58.291 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.291 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.291 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.291 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.291 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.292 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.292 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:28:58.292 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.292 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:28:58.292 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:58.292 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:58.292 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:58.292 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:58.292 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:58.292 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:58.292 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:58.292 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:58.292 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:58.292 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:58.292 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:28:58.292 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:28:58.292 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:28:58.292 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:28:58.292 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:58.292 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:58.292 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:28:58.292 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:28:58.292 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:28:58.292 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:58.292 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:58.292 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:58.292 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:58.292 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:58.292 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.292 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:58.292 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.292 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:58.292 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:58.292 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:28:58.292 10:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:29:06.442 10:01:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:06.442 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:06.442 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.442 
10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.442 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:06.443 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:06.443 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:06.443 10:01:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:06.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:06.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:29:06.443 00:29:06.443 --- 10.0.0.2 ping statistics --- 00:29:06.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.443 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:06.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:06.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:29:06.443 00:29:06.443 --- 10.0.0.1 ping statistics --- 00:29:06.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.443 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:06.443 10:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:06.443 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:29:06.443 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:06.443 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:06.443 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.443 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=4036316 00:29:06.443 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 4036316 00:29:06.443 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:29:06.443 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 4036316 ']' 00:29:06.443 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:06.443 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:06.443 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
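Condensed from the nvmftestinit trace above: the two e810 ports found under /sys/bus/pci/devices/<bdf>/net are split across a network namespace so that one host can act as both NVMe/TCP initiator (cvl_0_1, 10.0.0.1) and target (cvl_0_0, 10.0.0.2). Every command below appears in the trace; only the ordering is flattened:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
ping -c 1 10.0.0.2                                             # connectivity checks in both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The sub-millisecond round-trip times in the ping output confirm the path between the two ports; nvmf_tgt is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt above) so it listens on the target side.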
00:29:06.443 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:06.443 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.443 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:06.443 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:29:06.443 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:06.443 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:06.443 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.706 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:06.706 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:29:06.706 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:29:06.706 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:06.706 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:06.706 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:06.706 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:29:06.706 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:06.706 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:06.706 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9ccad2cf84f1fa2faa0043b9e3b11754 00:29:06.706 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:29:06.706 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.z5Y 00:29:06.706 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9ccad2cf84f1fa2faa0043b9e3b11754 0 00:29:06.706 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9ccad2cf84f1fa2faa0043b9e3b11754 0 00:29:06.706 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:06.706 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:06.706 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9ccad2cf84f1fa2faa0043b9e3b11754 00:29:06.706 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:29:06.706 10:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.z5Y 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.z5Y 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.z5Y 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:06.706 10:01:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e7cd4e445caae165a296e3c9d49e05f9e0fad7963eb0e37e1fd97c2de4ae6f88 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.3Go 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e7cd4e445caae165a296e3c9d49e05f9e0fad7963eb0e37e1fd97c2de4ae6f88 3 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e7cd4e445caae165a296e3c9d49e05f9e0fad7963eb0e37e1fd97c2de4ae6f88 3 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e7cd4e445caae165a296e3c9d49e05f9e0fad7963eb0e37e1fd97c2de4ae6f88 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.3Go 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.3Go 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.3Go 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a3d7bc3c961d70673f6eb1d2ed9f926df68309ff7573c1bf 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.gFl 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a3d7bc3c961d70673f6eb1d2ed9f926df68309ff7573c1bf 0 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a3d7bc3c961d70673f6eb1d2ed9f926df68309ff7573c1bf 0 
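gen_dhchap_key above draws len/2 random bytes from /dev/urandom through xxd -p (so the hex text has len characters) and hands that text to format_dhchap_key; the python step doing the final formatting is elided by the xtrace. A sketch of what it appears to produce, assuming the hex text itself serves as the secret: the NVMe-oF DH-HMAC-CHAP container DHHC-1:<hash-id>:<base64 of secret plus little-endian CRC-32>:, with hash ids matching the digests map in the trace (0 null, 1 sha256, 2 sha384, 3 sha512):

key=a3d7bc3c961d70673f6eb1d2ed9f926df68309ff7573c1bf   # 48-char hex text, as produced by the xxd call above
python3 - "$key" 0 <<'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode()                   # assumption: the ASCII hex is used directly as the secret bytes
crc = zlib.crc32(secret).to_bytes(4, "little")  # trailing CRC-32 lets consumers detect a corrupted key string
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(secret + crc).decode()))
EOF

Each formatted key is then written to a mktemp file, chmod 0600, and stashed in keys[]/ckeys[] for the auth test.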
00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a3d7bc3c961d70673f6eb1d2ed9f926df68309ff7573c1bf 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.gFl 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.gFl 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.gFl 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:29:06.706 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2a21d078361e3ef37babebdcb619330ba06cbf9ba3a08a49 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.LLn 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2a21d078361e3ef37babebdcb619330ba06cbf9ba3a08a49 2 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2a21d078361e3ef37babebdcb619330ba06cbf9ba3a08a49 2 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2a21d078361e3ef37babebdcb619330ba06cbf9ba3a08a49 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.LLn 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.LLn 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.LLn 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:06.969 10:01:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=19b2b4d1b7f9683241ee6adb74f218a1 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.yOh 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 19b2b4d1b7f9683241ee6adb74f218a1 1 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 19b2b4d1b7f9683241ee6adb74f218a1 1 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=19b2b4d1b7f9683241ee6adb74f218a1 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.yOh 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.yOh 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.yOh 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2a6bb5d7e7d859c26567f8fd66bbb632 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.6Tr 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2a6bb5d7e7d859c26567f8fd66bbb632 1 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2a6bb5d7e7d859c26567f8fd66bbb632 1 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=2a6bb5d7e7d859c26567f8fd66bbb632 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.6Tr 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.6Tr 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.6Tr 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f9fe15b99ead0cd8993615e1d5e25758506c347558a21ea8 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.auu 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f9fe15b99ead0cd8993615e1d5e25758506c347558a21ea8 2 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f9fe15b99ead0cd8993615e1d5e25758506c347558a21ea8 2 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f9fe15b99ead0cd8993615e1d5e25758506c347558a21ea8 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:29:06.969 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.auu 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.auu 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.auu 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:07.232 10:01:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3ad3bcd7be638c8d51370bec556d405d 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.PhH 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3ad3bcd7be638c8d51370bec556d405d 0 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3ad3bcd7be638c8d51370bec556d405d 0 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3ad3bcd7be638c8d51370bec556d405d 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.PhH 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.PhH 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.PhH 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ca1153dbeb94f7fd592394a679166301b83d44cff6dfed6988cf3245ebc97448 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Je7 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ca1153dbeb94f7fd592394a679166301b83d44cff6dfed6988cf3245ebc97448 3 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ca1153dbeb94f7fd592394a679166301b83d44cff6dfed6988cf3245ebc97448 3 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ca1153dbeb94f7fd592394a679166301b83d44cff6dfed6988cf3245ebc97448 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Je7 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Je7 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Je7 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 4036316 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 4036316 ']' 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:07.232 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.z5Y 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.3Go ]] 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3Go 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.gFl 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.LLn ]] 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.LLn 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.yOh 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.6Tr ]] 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6Tr 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.auu 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.PhH ]] 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.PhH 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Je7 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:07.494 10:01:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:29:07.494 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:07.495 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:29:07.495 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:07.495 10:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:10.799 Waiting for block devices as requested 00:29:11.060 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:11.060 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:11.060 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:11.320 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:11.320 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:11.321 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:11.581 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:11.581 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:11.581 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:29:11.841 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:11.841 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:11.841 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:12.102 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:12.102 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:12.102 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:12.102 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:12.362 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:13.304 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:29:13.304 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:13.304 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:29:13.304 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:13.305 No valid GPT data, bailing 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:13.305 10:01:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:29:13.305 00:29:13.305 Discovery Log Number of Records 2, Generation counter 2 00:29:13.305 =====Discovery Log Entry 0====== 00:29:13.305 trtype: tcp 00:29:13.305 adrfam: ipv4 00:29:13.305 subtype: current discovery subsystem 00:29:13.305 treq: not specified, sq flow control disable supported 00:29:13.305 portid: 1 00:29:13.305 trsvcid: 4420 00:29:13.305 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:13.305 traddr: 10.0.0.1 00:29:13.305 eflags: none 00:29:13.305 sectype: none 00:29:13.305 =====Discovery Log Entry 1====== 00:29:13.305 trtype: tcp 00:29:13.305 adrfam: ipv4 00:29:13.305 subtype: nvme subsystem 00:29:13.305 treq: not specified, sq flow control disable supported 00:29:13.305 portid: 1 00:29:13.305 trsvcid: 4420 00:29:13.305 subnqn: nqn.2024-02.io.spdk:cnode0 00:29:13.305 traddr: 10.0.0.1 00:29:13.305 eflags: none 00:29:13.305 sectype: none 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: ]] 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.305 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.567 nvme0n1 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNjYWQyY2Y4NGYxZmEyZmFhMDA0M2I5ZTNiMTE3NTRFzsRn: 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNjYWQyY2Y4NGYxZmEyZmFhMDA0M2I5ZTNiMTE3NTRFzsRn: 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: ]] 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
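
What follows is the main test loop of host/auth.sh: for every digest (sha256/sha384/sha512), DH group, and key id, connect_authenticate first constrains the initiator's allowed DH-HMAC-CHAP parameters, then attaches to the kernel nvmet target at 10.0.0.1:4420 with the matching --dhchap-key/--dhchap-ctrlr-key pair, checks that a controller named nvme0 shows up, and detaches. One iteration reduces to four RPCs, sketched below; the assumption here is that rpc_cmd in the trace forwards to scripts/rpc.py against the default /var/tmp/spdk.sock, while every flag shown appears verbatim in the log.

# One connect_authenticate iteration, reduced to its RPCs (sketch).
RPC=./scripts/rpc.py
$RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
$RPC bdev_nvme_get_controllers        # expect exactly one entry named nvme0
$RPC bdev_nvme_detach_controller nvme0

Because nvmet_auth_set_key installed the same keyid's secret on the kernel-target side beforehand, a successful attach demonstrates that both ends derived the same DH-HMAC-CHAP session key; the bare "nvme0n1" lines between iterations are the target's namespace appearing and disappearing as the controller connects and disconnects.
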
00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.567 10:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.829 nvme0n1 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:13.829 10:01:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: ]] 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.829 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.089 nvme0n1 00:29:14.089 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.089 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:14.089 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:14.089 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.089 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.089 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.089 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:14.089 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:14.089 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.089 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.089 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.089 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:14.089 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:14.089 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:14.089 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:14.089 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:14.089 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:14.089 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5: 00:29:14.089 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: 00:29:14.089 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:14.089 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:14.089 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5: 00:29:14.089 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: ]] 00:29:14.089 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: 00:29:14.089 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:29:14.089 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:14.089 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:14.089 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:14.089 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:14.089 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:14.089 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:14.089 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.089 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.090 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.090 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:14.090 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:14.090 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:14.090 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:14.090 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:14.090 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:14.090 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:14.090 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:14.090 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:14.090 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:14.090 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:14.090 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:14.090 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.090 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.090 nvme0n1 00:29:14.090 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.090 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:14.090 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:14.090 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:14.090 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.090 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjlmZTE1Yjk5ZWFkMGNkODk5MzYxNWUxZDVlMjU3NTg1MDZjMzQ3NTU4YTIxZWE4UIEjJg==: 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjlmZTE1Yjk5ZWFkMGNkODk5MzYxNWUxZDVlMjU3NTg1MDZjMzQ3NTU4YTIxZWE4UIEjJg==: 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: ]] 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.351 nvme0n1 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.351 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.613 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.613 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:14.613 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:29:14.613 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:14.613 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:14.613 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:14.613 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:14.613 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Y2ExMTUzZGJlYjk0ZjdmZDU5MjM5NGE2NzkxNjYzMDFiODNkNDRjZmY2ZGZlZDY5ODhjZjMyNDVlYmM5NzQ0OKRTwl0=: 00:29:14.613 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:14.613 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:14.613 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:14.613 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2ExMTUzZGJlYjk0ZjdmZDU5MjM5NGE2NzkxNjYzMDFiODNkNDRjZmY2ZGZlZDY5ODhjZjMyNDVlYmM5NzQ0OKRTwl0=: 00:29:14.613 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:14.613 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:29:14.613 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:14.613 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:14.614 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:14.614 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:14.614 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:14.614 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:14.614 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.614 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.614 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.614 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:14.614 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:14.614 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:14.614 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:14.614 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:14.614 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:14.614 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:14.614 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:14.614 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:14.614 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:14.614 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:14.614 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:14.614 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.614 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.614 nvme0n1 00:29:14.614 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.614 10:01:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:14.614 10:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:14.614 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.614 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.614 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.614 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:14.614 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:14.614 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.614 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.614 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.614 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:14.614 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:14.614 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:29:14.614 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:14.614 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:14.614 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:14.614 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:14.614 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNjYWQyY2Y4NGYxZmEyZmFhMDA0M2I5ZTNiMTE3NTRFzsRn: 00:29:14.614 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: 00:29:14.614 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:14.614 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:14.614 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNjYWQyY2Y4NGYxZmEyZmFhMDA0M2I5ZTNiMTE3NTRFzsRn: 00:29:14.614 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: ]] 00:29:14.614 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: 00:29:14.614 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:29:14.614 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:14.614 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:14.614 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:14.614 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:14.614 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:14.614 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:14.614 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.614 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.614 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.876 nvme0n1 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: ]] 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:14.876 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:14.877 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:14.877 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:14.877 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:14.877 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:14.877 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:14.877 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:14.877 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:14.877 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:14.877 
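
The `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` assignment traced at host/auth.sh@58 is the standard bash idiom for optional flags: `${var:+words}` expands to `words` only when `var` is set and non-empty, so the array holds either nothing or the flag/value pair, and splicing it in as `"${ckey[@]}"` passes exactly zero or two extra arguments, never an empty string. A standalone demonstration (variable names invented):

  ctrl_secret=""
  maybe_flag=(${ctrl_secret:+--dhchap-ctrlr-key "ckey1"})
  echo "args when unset: ${#maybe_flag[@]}"     # prints 0
  ctrl_secret="DHHC-1:00:example:"              # any non-empty value enables the flag
  maybe_flag=(${ctrl_secret:+--dhchap-ctrlr-key "ckey1"})
  echo "args when set:   ${#maybe_flag[@]}"     # prints 2
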
10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:14.877 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:14.877 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.877 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.138 nvme0n1 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5: 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5: 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: ]] 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:15.138 10:01:30 
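
The nvmf/common.sh@769-783 lines repeated before every attach are `get_main_ns_ip` resolving which address to dial: an associative array maps each transport to the name of the environment variable holding its address, the entry for `$TEST_TRANSPORT` is selected, and bash indirect expansion turns that name into its value, which is `NVMF_INITIATOR_IP` and hence 10.0.0.1 for this tcp run. A condensed reconstruction from the trace (error handling simplified):

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      [[ -z $TEST_TRANSPORT ]] && return 1
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # the *name* of a variable, e.g. NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1            # indirect expansion: that variable's *value*
      echo "${!ip}"                          # 10.0.0.1 in this run
  }
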
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:15.138 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:15.139 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:15.139 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:15.139 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.139 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.400 nvme0n1 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjlmZTE1Yjk5ZWFkMGNkODk5MzYxNWUxZDVlMjU3NTg1MDZjMzQ3NTU4YTIxZWE4UIEjJg==: 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjlmZTE1Yjk5ZWFkMGNkODk5MzYxNWUxZDVlMjU3NTg1MDZjMzQ3NTU4YTIxZWE4UIEjJg==: 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: ]] 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:15.400 10:01:30 
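
Each `nvmet_auth_set_key` pass (the host/auth.sh@48-51 echoes just above) programs the kernel soft target before the host tries to connect: the digest string, the DH group, the host secret, and, when one is defined, the controller secret. A hedged sketch of where those echoes plausibly land; the attribute names match the Linux nvmet authentication configfs interface, but treat the exact paths as an assumption, and `$hostnqn` stands for a host entry created earlier in the suite:

  cfs=/sys/kernel/config/nvmet/hosts/$hostnqn            # assumed layout
  echo 'hmac(sha256)' > "$cfs/dhchap_hash"               # host/auth.sh@48
  echo ffdhe3072      > "$cfs/dhchap_dhgroup"            # host/auth.sh@49
  echo "$key"         > "$cfs/dhchap_key"                # host/auth.sh@50
  [[ -z $ckey ]] || echo "$ckey" > "$cfs/dhchap_ctrl_key"   # @51 guards this write
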
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.400 10:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.662 nvme0n1 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2ExMTUzZGJlYjk0ZjdmZDU5MjM5NGE2NzkxNjYzMDFiODNkNDRjZmY2ZGZlZDY5ODhjZjMyNDVlYmM5NzQ0OKRTwl0=: 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2ExMTUzZGJlYjk0ZjdmZDU5MjM5NGE2NzkxNjYzMDFiODNkNDRjZmY2ZGZlZDY5ODhjZjMyNDVlYmM5NzQ0OKRTwl0=: 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:15.662 10:01:31 
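
Every `connect_authenticate <digest> <dhgroup> <keyid>` round in this trace is the same four-step cycle against the SPDK initiator: pin the host to a single digest/DH-group pair, attach with the matching key (plus controller key when the table defines one), verify the controller actually materialized, then detach for the next combination. Condensed from the host/auth.sh@55-65 trace; `key3`/`ckey3` are key names the suite registered earlier (not shown here), not the raw secrets:

  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # auth succeeded
  rpc_cmd bdev_nvme_detach_controller nvme0
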
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:15.662 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:15.663 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:15.663 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:15.663 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:15.663 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:15.663 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:15.663 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.663 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.923 nvme0n1 00:29:15.923 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.923 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:15.923 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:15.923 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.923 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.923 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.923 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:15.923 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:15.923 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.923 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.923 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNjYWQyY2Y4NGYxZmEyZmFhMDA0M2I5ZTNiMTE3NTRFzsRn: 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNjYWQyY2Y4NGYxZmEyZmFhMDA0M2I5ZTNiMTE3NTRFzsRn: 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: ]] 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.924 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.184 nvme0n1 00:29:16.184 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.184 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:16.184 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:16.184 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.184 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.443 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.443 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:16.443 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:16.443 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.443 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.443 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.443 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:16.443 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:29:16.443 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:16.443 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:16.443 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:16.443 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:16.443 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:16.443 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 00:29:16.443 10:01:31 
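
The `[[ nvme0 == \n\v\m\e\0 ]]` checks scattered through the trace look garbled but are an xtrace quirk: inside `[[ ]]` the right-hand side of `==` is a glob pattern, so when the script quotes it to force a literal comparison, `set -x` prints every character backslash-escaped to signal that none of it is pattern syntax. The test is simply "did the attached controller come back named nvme0". Reproducible in isolation:

  set -x
  name=nvme0
  [[ $name == "$name" ]]   # trace output: [[ nvme0 == \n\v\m\e\0 ]]
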
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:16.443 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:16.443 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:16.443 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: ]] 00:29:16.443 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 00:29:16.443 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:29:16.443 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:16.443 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:16.444 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:16.444 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:16.444 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:16.444 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:16.444 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.444 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.444 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.444 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:16.444 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:16.444 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:16.444 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:16.444 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:16.444 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:16.444 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:16.444 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:16.444 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:16.444 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:16.444 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:16.444 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:16.444 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.444 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.704 nvme0n1 00:29:16.704 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:29:16.704 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:16.704 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:16.704 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.704 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.704 10:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5: 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5: 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: ]] 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
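
The `common/autotest_common.sh@563 -- # xtrace_disable` / `@10 -- # set +x` pair bracketing every rpc_cmd is deliberate: a JSON-RPC round trip would otherwise flood the trace with plumbing, so tracing is saved and switched off for the call, and the recurring `[[ 0 == 0 ]]` lines are the restore path checking saved state and exit status before re-enabling it. A simplified sketch of the pattern (the real SPDK helpers keep a nesting stack and a persistent rpc.py session; this version just shells out per call):

  xtrace_disable() {
      PREV_XTRACE=$(set +o | grep xtrace)   # remember whether -x was on
      set +x
  }
  xtrace_restore() {
      eval "$PREV_XTRACE"                   # turn -x back on only if it was on before
  }
  rpc_cmd() {
      xtrace_disable
      local rc=0
      "$rootdir/scripts/rpc.py" "$@" || rc=$?   # $rootdir: the SPDK checkout root
      xtrace_restore
      return $rc
  }
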
00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.704 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.965 nvme0n1 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjlmZTE1Yjk5ZWFkMGNkODk5MzYxNWUxZDVlMjU3NTg1MDZjMzQ3NTU4YTIxZWE4UIEjJg==: 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjlmZTE1Yjk5ZWFkMGNkODk5MzYxNWUxZDVlMjU3NTg1MDZjMzQ3NTU4YTIxZWE4UIEjJg==: 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: ]] 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.965 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.225 nvme0n1 00:29:17.225 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.225 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.225 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:17.225 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.225 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.225 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.485 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2ExMTUzZGJlYjk0ZjdmZDU5MjM5NGE2NzkxNjYzMDFiODNkNDRjZmY2ZGZlZDY5ODhjZjMyNDVlYmM5NzQ0OKRTwl0=: 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2ExMTUzZGJlYjk0ZjdmZDU5MjM5NGE2NzkxNjYzMDFiODNkNDRjZmY2ZGZlZDY5ODhjZjMyNDVlYmM5NzQ0OKRTwl0=: 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:17.486 10:01:32 
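
Note the asymmetry the keyid=4 rounds exercise: that table entry has a host secret but an empty controller secret (`ckey=` followed by `[[ -z '' ]]` in the trace), so the attach carries only `--dhchap-key key4` and authentication is unidirectional, with the host proving possession of its secret while the controller is not challenged back. keyids 0-3 each pair a key with a ckey and therefore test bidirectional authentication. Side by side (other attach flags elided here; the full invocations appear verbatim in the trace):

  # bidirectional (keyids 0-3): both ends must prove their secret
  rpc_cmd bdev_nvme_attach_controller ... --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # unidirectional (keyid 4): no controller secret configured, host-only challenge
  rpc_cmd bdev_nvme_attach_controller ... --dhchap-key key4
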
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.486 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.747 nvme0n1 00:29:17.747 10:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNjYWQyY2Y4NGYxZmEyZmFhMDA0M2I5ZTNiMTE3NTRFzsRn: 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNjYWQyY2Y4NGYxZmEyZmFhMDA0M2I5ZTNiMTE3NTRFzsRn: 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: ]] 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.747 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.319 nvme0n1 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: ]] 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 
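The echoes traced just above (host/auth.sh@42-51) are the target-side half of each iteration: nvmet_auth_set_key loads one DH-HMAC-CHAP secret into the Linux nvmet target before the SPDK host attempts to connect with it. A minimal sketch of that helper, reconstructed from the traced commands; the configfs destination paths are an assumption, since the xtrace does not show where the echoes are redirected:

# Sketch reconstructed from the xtrace above; redirection targets assumed.
# ASSUMPTION: the host entry lives under the kernel nvmet configfs tree.
nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

nvmet_auth_set_key() {
  local digest=$1 dhgroup=$2 keyid=$3
  local key=${keys[keyid]} ckey=${ckeys[keyid]}

  echo "hmac(${digest})" > "${nvmet_host}/dhchap_hash"    # @48: 'hmac(sha256)'
  echo "${dhgroup}" > "${nvmet_host}/dhchap_dhgroup"      # @49: e.g. ffdhe6144
  echo "${key}" > "${nvmet_host}/dhchap_key"              # @50: host secret
  # @51: the controller (bidirectional) secret is optional; keyid 4 has none.
  [[ -z ${ckey} ]] || echo "${ckey}" > "${nvmet_host}/dhchap_ctrl_key"
}
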
00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.319 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.581 nvme0n1 00:29:18.581 10:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.581 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.581 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.581 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.581 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.581 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.581 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.581 10:01:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.581 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.581 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5: 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5: 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: ]] 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.842 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.104 nvme0n1 00:29:19.104 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.104 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.104 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.104 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.104 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.104 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.104 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.104 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.104 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.104 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.104 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.104 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.104 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:29:19.104 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.104 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:19.104 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:19.104 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:19.104 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjlmZTE1Yjk5ZWFkMGNkODk5MzYxNWUxZDVlMjU3NTg1MDZjMzQ3NTU4YTIxZWE4UIEjJg==: 00:29:19.104 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: 00:29:19.104 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:19.104 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:19.104 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:ZjlmZTE1Yjk5ZWFkMGNkODk5MzYxNWUxZDVlMjU3NTg1MDZjMzQ3NTU4YTIxZWE4UIEjJg==: 00:29:19.104 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: ]] 00:29:19.104 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: 00:29:19.104 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:29:19.104 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.104 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:19.104 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:19.104 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:19.104 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.104 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:19.104 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.104 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.365 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.365 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.365 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:19.365 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:19.365 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:19.365 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.365 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.365 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:19.365 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.365 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:19.365 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:19.365 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:19.365 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:19.365 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.365 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.627 nvme0n1 00:29:19.627 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.627 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.627 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.627 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.627 10:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2ExMTUzZGJlYjk0ZjdmZDU5MjM5NGE2NzkxNjYzMDFiODNkNDRjZmY2ZGZlZDY5ODhjZjMyNDVlYmM5NzQ0OKRTwl0=: 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2ExMTUzZGJlYjk0ZjdmZDU5MjM5NGE2NzkxNjYzMDFiODNkNDRjZmY2ZGZlZDY5ODhjZjMyNDVlYmM5NzQ0OKRTwl0=: 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.627 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.199 nvme0n1 00:29:20.199 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.199 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.199 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:20.199 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.199 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.199 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.199 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.199 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.199 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.199 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.199 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.199 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:20.199 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:20.199 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:29:20.199 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.199 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:20.199 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:20.199 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:20.199 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNjYWQyY2Y4NGYxZmEyZmFhMDA0M2I5ZTNiMTE3NTRFzsRn: 00:29:20.199 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: 00:29:20.199 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:20.199 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:20.199 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNjYWQyY2Y4NGYxZmEyZmFhMDA0M2I5ZTNiMTE3NTRFzsRn: 00:29:20.199 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: ]] 00:29:20.200 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: 00:29:20.200 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:29:20.200 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:20.200 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:20.200 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:20.200 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:20.200 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:20.200 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:20.200 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.200 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.200 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.200 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:20.200 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:20.200 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:20.200 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:20.200 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.200 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.200 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:20.200 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.200 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:20.200 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:20.200 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:20.200 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:20.200 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.200 10:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:20.771 nvme0n1 00:29:20.771 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.771 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.771 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:20.771 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.771 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.771 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: ]] 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:21.032 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.033 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.603 nvme0n1 00:29:21.603 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.603 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.603 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:21.603 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.603 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.603 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.603 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.603 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.604 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.604 10:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:29:21.604 
10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5: 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5: 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: ]] 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.604 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.546 nvme0n1 00:29:22.546 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.546 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:22.546 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:22.546 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.546 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.546 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.546 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:22.546 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:22.546 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.546 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.546 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.546 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:22.546 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:29:22.546 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:22.546 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:22.546 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:22.546 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:22.546 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjlmZTE1Yjk5ZWFkMGNkODk5MzYxNWUxZDVlMjU3NTg1MDZjMzQ3NTU4YTIxZWE4UIEjJg==: 00:29:22.546 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: 00:29:22.546 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:22.546 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:22.546 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjlmZTE1Yjk5ZWFkMGNkODk5MzYxNWUxZDVlMjU3NTg1MDZjMzQ3NTU4YTIxZWE4UIEjJg==: 00:29:22.546 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: ]] 00:29:22.546 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: 00:29:22.546 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:29:22.546 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:22.546 
10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:22.546 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:22.547 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:22.547 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:22.547 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:22.547 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.547 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.547 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.547 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:22.547 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:22.547 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:22.547 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:22.547 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:22.547 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:22.547 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:22.547 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:22.547 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:22.547 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:22.547 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:22.547 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:22.547 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.547 10:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.119 nvme0n1 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2ExMTUzZGJlYjk0ZjdmZDU5MjM5NGE2NzkxNjYzMDFiODNkNDRjZmY2ZGZlZDY5ODhjZjMyNDVlYmM5NzQ0OKRTwl0=: 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2ExMTUzZGJlYjk0ZjdmZDU5MjM5NGE2NzkxNjYzMDFiODNkNDRjZmY2ZGZlZDY5ODhjZjMyNDVlYmM5NzQ0OKRTwl0=: 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.119 10:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.689 nvme0n1 00:29:23.690 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.690 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:23.690 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:23.690 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.690 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.690 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.949 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:23.949 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:23.949 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.949 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.949 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.949 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:23.949 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:23.949 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:23.949 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:29:23.949 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:23.949 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:23.949 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:23.949 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:23.949 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNjYWQyY2Y4NGYxZmEyZmFhMDA0M2I5ZTNiMTE3NTRFzsRn: 00:29:23.949 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: 00:29:23.949 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:23.949 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:23.949 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNjYWQyY2Y4NGYxZmEyZmFhMDA0M2I5ZTNiMTE3NTRFzsRn: 00:29:23.949 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: ]] 00:29:23.949 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: 00:29:23.949 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:29:23.949 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:23.949 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:23.949 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:23.949 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:23.949 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:23.949 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:23.949 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.949 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.949 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.950 nvme0n1 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: ]] 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.950 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.211 nvme0n1 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5: 00:29:24.211 10:01:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5: 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: ]] 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:24.211 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:24.212 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.212 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.212 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.212 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:24.212 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:24.212 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:24.212 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:24.212 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:24.212 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:24.212 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:24.212 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:24.212 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:24.212 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:24.212 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:24.212 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:24.212 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.212 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.473 nvme0n1 00:29:24.473 10:01:39 
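
Each secret in this trace is a DHHC-1 container, `DHHC-1:<hh>:<base64>:`, where (per the NVMe over Fabrics spec, not anything this script defines) `<hh>` is 00 for an untransformed secret or 01/02/03 for a secret sized for SHA-256/384/512, and the base64 blob is the secret followed by a 4-byte CRC-32. The id describes the secret itself, which is why a `:01:` key is still valid here while sha384 is the digest being negotiated. A quick size sanity check on the key-id-2 secret just echoed:

key='DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5:'
IFS=: read -r _ hh b64 _ <<< "$key"
len=$(printf '%s' "$b64" | base64 -d | wc -c)
# hash id 01 => 32-byte secret, plus 4 CRC-32 bytes => 36
echo "hash id $hh, decoded payload $len bytes (expect 36)"
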
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjlmZTE1Yjk5ZWFkMGNkODk5MzYxNWUxZDVlMjU3NTg1MDZjMzQ3NTU4YTIxZWE4UIEjJg==: 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjlmZTE1Yjk5ZWFkMGNkODk5MzYxNWUxZDVlMjU3NTg1MDZjMzQ3NTU4YTIxZWE4UIEjJg==: 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: ]] 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.473 10:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.735 nvme0n1 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2ExMTUzZGJlYjk0ZjdmZDU5MjM5NGE2NzkxNjYzMDFiODNkNDRjZmY2ZGZlZDY5ODhjZjMyNDVlYmM5NzQ0OKRTwl0=: 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2ExMTUzZGJlYjk0ZjdmZDU5MjM5NGE2NzkxNjYzMDFiODNkNDRjZmY2ZGZlZDY5ODhjZjMyNDVlYmM5NzQ0OKRTwl0=: 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
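
Key id 4 is the unidirectional case: its `ckey` is empty, so the `${ckeys[keyid]:+...}` expansion at `@58` drops the `--dhchap-ctrlr-key` pair entirely and only the host is challenged; the attach above accordingly carries `--dhchap-key key4` alone. The trick in isolation (sample values are placeholders, not the suite's keys):

ckeys[1]='DHHC-1:02:placeholder:' ckeys[4]=''
for keyid in 1 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid extra args: ${ckey[*]:-<none>}"
done
# keyid=1 extra args: --dhchap-ctrlr-key ckey1
# keyid=4 extra args: <none>
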
common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.735 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.997 nvme0n1 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNjYWQyY2Y4NGYxZmEyZmFhMDA0M2I5ZTNiMTE3NTRFzsRn: 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNjYWQyY2Y4NGYxZmEyZmFhMDA0M2I5ZTNiMTE3NTRFzsRn: 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: ]] 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.997 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.259 nvme0n1 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.259 
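
Here the outer loop has advanced from ffdhe2048 to ffdhe3072 and the whole key sweep restarts, which is why the trace repeats with only the DH group changed. The driver (`@101`-`@104`) is effectively a cross-product of groups and key ids; the groups listed below are the ones seen in this trace so far, not necessarily the script's full list:

for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do
    for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key sha384 "$dhgroup" "$keyid"    # @103: provision target
        connect_authenticate sha384 "$dhgroup" "$keyid"  # @104: host attach + verify
    done
done
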
10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: ]] 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:25.259 10:01:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.259 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.520 nvme0n1 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5: 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5: 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: ]] 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.520 10:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.781 nvme0n1 00:29:25.781 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.781 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:25.781 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:25.781 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.781 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.781 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.781 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:29:25.781 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:25.781 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjlmZTE1Yjk5ZWFkMGNkODk5MzYxNWUxZDVlMjU3NTg1MDZjMzQ3NTU4YTIxZWE4UIEjJg==: 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjlmZTE1Yjk5ZWFkMGNkODk5MzYxNWUxZDVlMjU3NTg1MDZjMzQ3NTU4YTIxZWE4UIEjJg==: 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: ]] 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.782 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.042 nvme0n1 00:29:26.042 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.042 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:26.042 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.042 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:26.042 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.042 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.042 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2ExMTUzZGJlYjk0ZjdmZDU5MjM5NGE2NzkxNjYzMDFiODNkNDRjZmY2ZGZlZDY5ODhjZjMyNDVlYmM5NzQ0OKRTwl0=: 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:26.043 
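
`get_main_ns_ip` (`nvmf/common.sh@769`-`@783`, traced before every attach) maps the transport to the right environment variable and prints its value, which is where the `-a 10.0.0.1` argument comes from. A condensed reconstruction from the trace, assuming `TEST_TRANSPORT` is the selector variable:

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP   # @772
        [tcp]=NVMF_INITIATOR_IP       # @773
    )
    [[ -z $TEST_TRANSPORT ]] && return 1   # @775
    ip=${ip_candidates[$TEST_TRANSPORT]}   # @776
    [[ -z ${!ip} ]] && return 1            # @778
    echo "${!ip}"                          # @783: 10.0.0.1 in this run
}
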
10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2ExMTUzZGJlYjk0ZjdmZDU5MjM5NGE2NzkxNjYzMDFiODNkNDRjZmY2ZGZlZDY5ODhjZjMyNDVlYmM5NzQ0OKRTwl0=: 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.043 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.303 nvme0n1 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.303 
10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNjYWQyY2Y4NGYxZmEyZmFhMDA0M2I5ZTNiMTE3NTRFzsRn: 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNjYWQyY2Y4NGYxZmEyZmFhMDA0M2I5ZTNiMTE3NTRFzsRn: 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: ]] 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.303 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.564 nvme0n1 00:29:26.564 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.564 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:26.564 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:26.564 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.564 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.564 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.564 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.564 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.564 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.564 10:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
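
Every RPC in the trace is bracketed by `xtrace_disable`/`set +x` noise and followed by a `[[ 0 == 0 ]]` from `autotest_common.sh@563`/`@591`: that is the harness muting tracing around the JSON-RPC call and then asserting it returned 0 before the script proceeds. A minimal stand-in with the same observable shape (the real wrapper keeps a persistent RPC session and differs in detail):

rpc_cmd() {
    xtrace_disable            # @563: mute tracing around the call
    ./scripts/rpc.py "$@"
    local status=$?
    xtrace_restore
    [[ $status == 0 ]]        # @591: fail the test on a bad RPC
}
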
key=DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: ]] 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:26.564 10:01:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.564 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.824 nvme0n1 00:29:26.825 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.825 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:26.825 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.825 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:26.825 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5: 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5: 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: ]] 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.086 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.347 nvme0n1 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjlmZTE1Yjk5ZWFkMGNkODk5MzYxNWUxZDVlMjU3NTg1MDZjMzQ3NTU4YTIxZWE4UIEjJg==: 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjlmZTE1Yjk5ZWFkMGNkODk5MzYxNWUxZDVlMjU3NTg1MDZjMzQ3NTU4YTIxZWE4UIEjJg==: 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: ]] 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.347 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:27.348 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.348 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:27.348 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:27.348 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:27.348 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:27.348 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.348 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.608 nvme0n1 00:29:27.608 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.608 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.608 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:27.608 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.608 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.608 10:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2ExMTUzZGJlYjk0ZjdmZDU5MjM5NGE2NzkxNjYzMDFiODNkNDRjZmY2ZGZlZDY5ODhjZjMyNDVlYmM5NzQ0OKRTwl0=: 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2ExMTUzZGJlYjk0ZjdmZDU5MjM5NGE2NzkxNjYzMDFiODNkNDRjZmY2ZGZlZDY5ODhjZjMyNDVlYmM5NzQ0OKRTwl0=: 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:27.608 10:01:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:27.608 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:27.609 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:27.609 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.609 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.869 nvme0n1 00:29:27.869 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.869 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.869 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:27.869 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.869 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.869 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
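The keyid=4 round traced just above attaches with only --dhchap-key key4: its ckey entry is empty (hence the [[ -z '' ]] guard), so no controller key is echoed and unidirectional authentication is exercised. The mechanism is the array expansion traced at host/auth.sh@58; a standalone illustration, with a hypothetical placeholder table and elided key material:

  # ${var:+word} expands to nothing when var is unset or empty, so the extra
  # RPC arguments vanish entirely for keyids that have no controller key.
  ckeys=([1]="DHHC-1:02:elided==:")    # placeholder table; keyid 4 deliberately absent
  for keyid in 1 4; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid extra args: ${ckey[*]:-<none>}"
  done
  # prints: keyid=1 extra args: --dhchap-ctrlr-key ckey1
  #         keyid=4 extra args: <none>
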
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNjYWQyY2Y4NGYxZmEyZmFhMDA0M2I5ZTNiMTE3NTRFzsRn: 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNjYWQyY2Y4NGYxZmEyZmFhMDA0M2I5ZTNiMTE3NTRFzsRn: 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: ]] 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.130 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.390 nvme0n1 00:29:28.390 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.390 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:28.390 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:28.390 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.390 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.390 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.390 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.390 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:28.390 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.390 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.390 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: ]] 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.651 10:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.911 nvme0n1 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:28.911 10:01:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5: 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5: 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: ]] 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.911 10:01:44 
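The same scaffolding brackets every RPC in this trace: autotest_common.sh@563 quiets xtrace before the call, and the ubiquitous [[ 0 == 0 ]] at @591 asserts that the RPC exited 0. Functionally that amounts to the stand-in below (SPDK's real rpc_cmd in autotest_common.sh is more elaborate; this only mirrors the quiet-then-assert behaviour visible here):

  rpc_cmd() {    # stand-in only, not the real wrapper
      local rc
      set +x                     # xtrace_disable, as traced at autotest_common.sh@563
      rpc.py "$@"
      rc=$?
      set -x
      [[ $rc == 0 ]]             # traced as [[ 0 == 0 ]] on success (@591)
  }
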
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:28.911 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:28.912 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:28.912 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:29.172 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:29.172 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.172 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.439 nvme0n1 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZjlmZTE1Yjk5ZWFkMGNkODk5MzYxNWUxZDVlMjU3NTg1MDZjMzQ3NTU4YTIxZWE4UIEjJg==: 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjlmZTE1Yjk5ZWFkMGNkODk5MzYxNWUxZDVlMjU3NTg1MDZjMzQ3NTU4YTIxZWE4UIEjJg==: 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: ]] 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:29.439 10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.439 
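The DHHC-1 strings being set here follow the NVMe-oF representation of a DH-HMAC-CHAP shared secret: DHHC-1:<t>:<base64>:, where <t> is 00-03 and names the hash used to transform the secret (00 = none, 01/02/03 = SHA-256/384/512), and the base64 payload is the key followed by its 4-byte CRC-32, so it decodes to 36, 52 or 68 bytes for a 32-, 48- or 64-byte key. A small shape check, reusing a key from this trace:

  check_dhchap_key() {
      local key=$1 b64 len
      [[ $key =~ ^DHHC-1:(00|01|02|03):([A-Za-z0-9+/=]+):$ ]] || { echo "bad format"; return 1; }
      b64=${BASH_REMATCH[2]}
      len=$(printf '%s' "$b64" | base64 -d | wc -c)
      case $len in
          36|52|68) echo "ok: $((len - 4))-byte key, transform ${BASH_REMATCH[1]}" ;;
          *)        echo "unexpected decoded length $len"; return 1 ;;
      esac
  }
  check_dhchap_key 'DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5:'
  # prints: ok: 32-byte key, transform 01
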
10:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.093 nvme0n1 00:29:30.093 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.093 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:30.093 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:30.093 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.093 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.093 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.093 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:30.093 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:30.093 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.093 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.093 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.093 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:30.093 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:29:30.093 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.093 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:30.093 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:30.093 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:30.093 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2ExMTUzZGJlYjk0ZjdmZDU5MjM5NGE2NzkxNjYzMDFiODNkNDRjZmY2ZGZlZDY5ODhjZjMyNDVlYmM5NzQ0OKRTwl0=: 00:29:30.093 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:30.093 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:30.093 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:30.093 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2ExMTUzZGJlYjk0ZjdmZDU5MjM5NGE2NzkxNjYzMDFiODNkNDRjZmY2ZGZlZDY5ODhjZjMyNDVlYmM5NzQ0OKRTwl0=: 00:29:30.093 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:30.093 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:29:30.093 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:30.093 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:30.094 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:30.094 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:30.094 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:30.094 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:30.094 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.094 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.094 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.094 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:30.094 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:30.094 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:30.094 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:30.094 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:30.094 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:30.094 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:30.094 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:30.094 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:30.094 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:30.094 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:30.094 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:30.094 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.094 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.364 nvme0n1 00:29:30.364 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.364 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:30.364 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:30.364 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.364 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.364 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.364 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:30.364 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:30.364 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.364 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.624 10:01:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNjYWQyY2Y4NGYxZmEyZmFhMDA0M2I5ZTNiMTE3NTRFzsRn: 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNjYWQyY2Y4NGYxZmEyZmFhMDA0M2I5ZTNiMTE3NTRFzsRn: 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: ]] 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.624 10:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.196 nvme0n1 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: ]] 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.196 10:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.768 nvme0n1 00:29:31.768 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.768 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.768 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:31.768 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.768 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5: 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5: 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: ]] 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.029 
10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.029 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.600 nvme0n1 00:29:32.600 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.600 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.600 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.600 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.600 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.600 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.600 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.600 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.600 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.600 10:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.600 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.600 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:32.600 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:29:32.600 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.600 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:32.600 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:32.600 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:32.600 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjlmZTE1Yjk5ZWFkMGNkODk5MzYxNWUxZDVlMjU3NTg1MDZjMzQ3NTU4YTIxZWE4UIEjJg==: 00:29:32.600 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: 00:29:32.600 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:32.600 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:32.600 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjlmZTE1Yjk5ZWFkMGNkODk5MzYxNWUxZDVlMjU3NTg1MDZjMzQ3NTU4YTIxZWE4UIEjJg==: 00:29:32.600 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: ]] 00:29:32.600 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: 00:29:32.600 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:29:32.600 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:32.600 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:32.600 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:32.600 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:32.601 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.601 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:32.601 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.601 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.601 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.601 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:32.601 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:32.601 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:32.601 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:32.601 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.601 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.601 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:32.601 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.601 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:32.601 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:32.601 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:32.601 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:32.601 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.601 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.547 nvme0n1 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.547 10:01:48 
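
Each iteration's host-side check is the same four RPCs visible in the trace; rpc_cmd is the autotest wrapper around SPDK's JSON-RPC client, and keyN/ckeyN are keyring names registered earlier in the run (not shown in this excerpt). The bare "nvme0n1" tokens in the log are the attach RPC's stdout: the namespace bdev created on a successful connect. Condensed, one pass looks like:

# Restrict the host to the combination under test, then connect with DH-CHAP:
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3      # prints nvme0n1 on success
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
rpc_cmd bdev_nvme_detach_controller nvme0
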
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2ExMTUzZGJlYjk0ZjdmZDU5MjM5NGE2NzkxNjYzMDFiODNkNDRjZmY2ZGZlZDY5ODhjZjMyNDVlYmM5NzQ0OKRTwl0=: 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2ExMTUzZGJlYjk0ZjdmZDU5MjM5NGE2NzkxNjYzMDFiODNkNDRjZmY2ZGZlZDY5ODhjZjMyNDVlYmM5NzQ0OKRTwl0=: 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:33.547 10:01:48 
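
Keyid 4 is the only entry without a controller key, so ckey expands empty and the [[ -z '' ]] guard above skips the ctrlr-key step. The DHHC-1 strings themselves follow the NVMe-oF secret representation: DHHC-1:<hh>:<base64 of the secret plus a trailing 4-byte CRC-32>:, where <hh> is 00 (no hash transformation), 01 (SHA-256), 02 (SHA-384) or 03 (SHA-512). A quick way to inspect one of the test secrets (key string copied from the trace above):

key='DHHC-1:03:Y2ExMTUzZGJlYjk0ZjdmZDU5MjM5NGE2NzkxNjYzMDFiODNkNDRjZmY2ZGZlZDY5ODhjZjMyNDVlYmM5NzQ0OKRTwl0=:'
cut -d: -f3 <<< "$key" | base64 -d | wc -c   # decoded length = secret + 4 CRC bytes
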
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.547 10:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.120 nvme0n1 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNjYWQyY2Y4NGYxZmEyZmFhMDA0M2I5ZTNiMTE3NTRFzsRn: 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNjYWQyY2Y4NGYxZmEyZmFhMDA0M2I5ZTNiMTE3NTRFzsRn: 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: ]] 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.120 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:34.381 nvme0n1 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: ]] 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.382 nvme0n1 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.382 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:29:34.645 
10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5: 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5: 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: ]] 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.645 10:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.645 nvme0n1 00:29:34.645 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.645 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:34.645 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:34.645 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.645 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.645 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.907 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.907 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:34.907 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.907 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.907 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.907 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:34.907 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:29:34.907 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:34.907 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:34.907 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:34.907 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:34.907 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjlmZTE1Yjk5ZWFkMGNkODk5MzYxNWUxZDVlMjU3NTg1MDZjMzQ3NTU4YTIxZWE4UIEjJg==: 00:29:34.907 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: 00:29:34.907 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:34.907 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:34.907 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjlmZTE1Yjk5ZWFkMGNkODk5MzYxNWUxZDVlMjU3NTg1MDZjMzQ3NTU4YTIxZWE4UIEjJg==: 00:29:34.907 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: ]] 00:29:34.907 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: 00:29:34.907 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:29:34.907 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:34.907 
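
The get_main_ns_ip traces (local ip ... echo 10.0.0.1) resolve the address to dial: a transport-indexed map holds environment variable *names*, and the chosen name is dereferenced with bash indirect expansion. A minimal reconstruction; the transport variable is assumed to be $TEST_TRANSPORT, since only its value "tcp" is visible in the trace:

get_main_ns_ip() {
  local ip
  local -A ip_candidates=(
    [rdma]=NVMF_FIRST_TARGET_IP
    [tcp]=NVMF_INITIATOR_IP
  )
  [[ -n $TEST_TRANSPORT && -n ${ip_candidates[$TEST_TRANSPORT]} ]] || return 1
  ip=${ip_candidates[$TEST_TRANSPORT]}
  [[ -n ${!ip} ]] || return 1      # indirect expansion: ${!ip} -> 10.0.0.1 here
  echo "${!ip}"
}
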
10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:34.907 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:34.907 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:34.907 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:34.907 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:34.907 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.907 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.907 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.907 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:34.907 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:34.908 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:34.908 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:34.908 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:34.908 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:34.908 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:34.908 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:34.908 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:34.908 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:34.908 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:34.908 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:34.908 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.908 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.908 nvme0n1 00:29:34.908 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.908 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:34.908 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:34.908 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.908 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.908 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.908 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.908 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:34.908 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.908 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:34.908 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.908 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:34.908 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:29:34.908 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:34.908 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:34.908 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:34.908 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:34.908 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2ExMTUzZGJlYjk0ZjdmZDU5MjM5NGE2NzkxNjYzMDFiODNkNDRjZmY2ZGZlZDY5ODhjZjMyNDVlYmM5NzQ0OKRTwl0=: 00:29:34.908 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:34.908 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:34.908 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:35.170 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2ExMTUzZGJlYjk0ZjdmZDU5MjM5NGE2NzkxNjYzMDFiODNkNDRjZmY2ZGZlZDY5ODhjZjMyNDVlYmM5NzQ0OKRTwl0=: 00:29:35.170 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:35.170 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:29:35.170 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:35.170 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:35.170 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:35.170 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:35.170 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:35.170 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:35.170 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.170 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.170 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.170 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:35.170 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:35.170 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:35.170 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:35.170 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:35.170 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:35.170 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:35.170 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:35.170 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:35.170 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.171 nvme0n1 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNjYWQyY2Y4NGYxZmEyZmFhMDA0M2I5ZTNiMTE3NTRFzsRn: 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNjYWQyY2Y4NGYxZmEyZmFhMDA0M2I5ZTNiMTE3NTRFzsRn: 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: ]] 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.171 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.433 nvme0n1 00:29:35.433 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.433 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:35.433 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:35.433 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.433 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.433 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.433 
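
By this point the log has moved from sha384 to sha512 and is cycling DH groups; the @100-@103 script locations show the shape of the sweep. Reconstructed from the loop headers in the trace, with array contents limited to the values observed in this section (earlier parts of the run cover the remaining combinations):

digests=(sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe8192)
for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do          # keyids 0..4
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # target side
      connect_authenticate "$digest" "$dhgroup" "$keyid"   # host side
    done
  done
done
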
10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:35.433 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:35.433 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.433 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.433 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.433 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:35.433 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:29:35.433 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:35.433 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:35.433 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:35.433 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:35.433 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:35.433 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 00:29:35.433 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:35.433 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:35.433 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:35.433 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: ]] 00:29:35.433 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 00:29:35.433 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:29:35.433 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:35.433 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:35.433 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:35.433 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:35.433 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:35.433 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:35.433 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.433 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.433 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.433 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:35.433 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:35.434 10:01:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:35.434 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:35.434 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:35.434 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:35.434 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:35.434 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:35.434 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:35.434 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:35.434 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:35.434 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:35.434 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.434 10:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.695 nvme0n1 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5: 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: 00:29:35.695 10:01:51 
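
After every attach the script confirms exactly one controller exists and tears it down before the next combination. Note the escaped right-hand side in the trace's [[ nvme0 == \n\v\m\e\0 ]]: inside [[ ]], an unquoted right operand of == is treated as a glob pattern, so auth.sh escapes each character to force a literal comparison. An equivalent check:

name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]]             # quoted: literal match, same effect
rpc_cmd bdev_nvme_detach_controller nvme0
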
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5: 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: ]] 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.695 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.958 nvme0n1 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjlmZTE1Yjk5ZWFkMGNkODk5MzYxNWUxZDVlMjU3NTg1MDZjMzQ3NTU4YTIxZWE4UIEjJg==: 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjlmZTE1Yjk5ZWFkMGNkODk5MzYxNWUxZDVlMjU3NTg1MDZjMzQ3NTU4YTIxZWE4UIEjJg==: 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: ]] 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.958 10:01:51 
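
The constant xtrace_disable / set +x / [[ 0 == 0 ]] triplets come from the harness in common/autotest_common.sh: tracing is suspended around each rpc_cmd so multi-line JSON-RPC output does not flood the xtrace log, and the saved exit status is compared against 0 afterwards. A rough sketch of the mechanism; only the names and line tags appear in the trace, the function bodies below are assumptions:

xtrace_disable() { PREV_OPTS=$-; set +x; }           # seen as @563 / "set +x" @10
xtrace_restore() { [[ $PREV_OPTS == *x* ]] && set -x; }
rpc_cmd() {
  local rc=0
  xtrace_disable
  "$rootdir/scripts/rpc.py" "$@" || rc=$?            # assumed dispatch path
  xtrace_restore
  [[ $rc == 0 ]]                                     # renders as "[[ 0 == 0 ]]" (@591)
}
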
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.958 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.219 nvme0n1 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:36.219 
10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2ExMTUzZGJlYjk0ZjdmZDU5MjM5NGE2NzkxNjYzMDFiODNkNDRjZmY2ZGZlZDY5ODhjZjMyNDVlYmM5NzQ0OKRTwl0=: 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2ExMTUzZGJlYjk0ZjdmZDU5MjM5NGE2NzkxNjYzMDFiODNkNDRjZmY2ZGZlZDY5ODhjZjMyNDVlYmM5NzQ0OKRTwl0=: 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:36.219 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:36.220 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:36.220 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.220 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
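Each leg of this stretch of the trace is the same two-step pattern per dhgroup and keyid: nvmet_auth_set_key (auth.sh@42..@51) programs the target side with the key under test, then connect_authenticate (auth.sh@55..@65) drives the SPDK host side and tears back down. A reconstruction from the xtrace tags, assuming the suite's own rpc_cmd wrapper and its keys[]/ckeys[] arrays (the digest is pinned to sha512 throughout this stretch):

    connect_authenticate() {                            # auth.sh@55..@65
        local digest=$1 dhgroup=$2 keyid=$3 ckey
        # an empty ckeys[keyid] (keyid 4 here) drops the flag entirely
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

    for dhgroup in "${dhgroups[@]}"; do                 # auth.sh@101
        for keyid in "${!keys[@]}"; do                  # auth.sh@102
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"     # auth.sh@103
            connect_authenticate sha512 "$dhgroup" "$keyid"   # auth.sh@104
        done
    done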
00:29:36.480 nvme0n1 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNjYWQyY2Y4NGYxZmEyZmFhMDA0M2I5ZTNiMTE3NTRFzsRn: 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNjYWQyY2Y4NGYxZmEyZmFhMDA0M2I5ZTNiMTE3NTRFzsRn: 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: ]] 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:36.480 10:01:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.480 10:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.741 nvme0n1 00:29:36.741 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.741 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.741 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:36.741 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.741 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.001 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.001 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.001 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.001 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.001 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.001 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.001 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:37.001 10:01:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:29:37.001 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.001 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:37.001 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:37.002 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:37.002 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:37.002 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 00:29:37.002 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:37.002 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:37.002 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:37.002 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: ]] 00:29:37.002 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 00:29:37.002 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:29:37.002 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:37.002 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:37.002 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:37.002 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:37.002 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:37.002 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:37.002 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.002 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.002 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.002 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:37.002 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:37.002 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:37.002 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:37.002 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.002 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.002 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:37.002 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.002 10:01:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:37.002 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:37.002 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:37.002 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:37.002 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.002 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.263 nvme0n1 00:29:37.263 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.263 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.263 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:37.263 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.263 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.263 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.263 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.263 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.263 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.263 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.263 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.263 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5: 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5: 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: ]] 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
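The get_main_ns_ip trace (nvmf/common.sh@769..@783) that precedes every attach picks the target address by transport: an associative array maps the transport name to the environment variable holding the address, which is then read through indirect expansion. A reconstruction; the TEST_TRANSPORT variable name is an assumption, since the trace only shows its expanded value (tcp):

    get_main_ns_ip() {                                  # nvmf/common.sh@769..@783
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # trace: [[ -z tcp ]] / [[ -z NVMF_INITIATOR_IP ]]
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}            # name of the env var to read
        ip=${!ip}                                       # indirect expansion -> 10.0.0.1 here
        [[ -z $ip ]] && return 1                        # trace: [[ -z 10.0.0.1 ]]
        echo "$ip"
    }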
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.264 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.524 nvme0n1 00:29:37.524 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.524 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.524 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:37.524 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.524 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.524 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.524 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.524 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:37.524 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.524 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjlmZTE1Yjk5ZWFkMGNkODk5MzYxNWUxZDVlMjU3NTg1MDZjMzQ3NTU4YTIxZWE4UIEjJg==: 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjlmZTE1Yjk5ZWFkMGNkODk5MzYxNWUxZDVlMjU3NTg1MDZjMzQ3NTU4YTIxZWE4UIEjJg==: 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: ]] 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.525 10:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.786 nvme0n1 00:29:37.786 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.786 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.786 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:37.786 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.786 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.786 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2ExMTUzZGJlYjk0ZjdmZDU5MjM5NGE2NzkxNjYzMDFiODNkNDRjZmY2ZGZlZDY5ODhjZjMyNDVlYmM5NzQ0OKRTwl0=: 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Y2ExMTUzZGJlYjk0ZjdmZDU5MjM5NGE2NzkxNjYzMDFiODNkNDRjZmY2ZGZlZDY5ODhjZjMyNDVlYmM5NzQ0OKRTwl0=: 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.047 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.308 nvme0n1 00:29:38.308 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.308 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:38.308 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:38.308 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.308 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.308 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
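A detail visible in the keyid=4 legs just above: ckeys[4] is empty, so the auth.sh@58 expansion produces an empty array and bdev_nvme_attach_controller is called without --dhchap-ctrlr-key, i.e. one-way (host-only) authentication, whereas keyids 0..3 also verify the controller. The `:+` array idiom in isolation (values hypothetical):

    ckeys=([0]="DHHC-1:03:...:" [4]="")   # hypothetical: 0 has a ctrlr key, 4 does not
    for keyid in 0 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[*]:-<no extra args>}"
    done
    # keyid=0 -> --dhchap-ctrlr-key ckey0
    # keyid=4 -> <no extra args>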
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.308 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:38.308 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:38.308 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.308 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.308 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.308 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:38.308 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:38.308 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:29:38.308 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:38.309 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:38.309 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:38.309 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:38.309 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNjYWQyY2Y4NGYxZmEyZmFhMDA0M2I5ZTNiMTE3NTRFzsRn: 00:29:38.309 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: 00:29:38.309 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:38.309 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:38.309 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNjYWQyY2Y4NGYxZmEyZmFhMDA0M2I5ZTNiMTE3NTRFzsRn: 00:29:38.309 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: ]] 00:29:38.309 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: 00:29:38.309 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:29:38.309 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:38.309 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:38.309 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:38.309 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:38.309 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:38.309 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:38.309 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.309 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.309 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.309 10:01:53 
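Note that bdev_nvme_set_options is re-issued before every attach with exactly one digest and one dhgroup, so each leg can only negotiate the combination under test; a failure therefore isolates that specific pair. A sketch of the unwrapped call (rpc_cmd is the suite's wrapper around scripts/rpc.py; the multi-value form is our assumption from the plural flag names):

    # pinned to a single pair, as in this trace
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    # permissive (assumed multi-value form): offer several candidates at once
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 sha384 sha512 \
        --dhchap-dhgroups ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192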
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:38.309 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:38.309 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:38.309 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:38.309 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:38.309 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:38.309 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:38.309 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:38.309 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:38.309 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:38.309 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:38.309 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:38.309 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.309 10:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.880 nvme0n1 00:29:38.880 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: ]] 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:38.881 10:01:54 
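The bare nvme0n1 lines scattered through the log are the attach RPC's stdout: bdev_nvme_attach_controller prints the names of the bdevs it created (here, namespace 1 of controller nvme0). Each leg then verifies the controller by name before teardown, reconstructed from auth.sh@64/@65 (the backslashes in [[ nvme0 == \n\v\m\e\0 ]] are xtrace's rendering of a quoted, hence literal rather than glob, right-hand side):

    # every connect_authenticate leg ends like this
    ctrlr=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $ctrlr == "nvme0" ]]                  # quoted RHS -> literal match
    rpc_cmd bdev_nvme_detach_controller nvme0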
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.881 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.143 nvme0n1 00:29:39.143 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.143 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:39.143 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:39.143 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.143 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.143 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.143 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.143 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:39.143 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.143 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.404 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.404 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:39.404 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:29:39.404 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:39.404 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:39.404 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:39.404 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:39.404 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5: 00:29:39.404 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: 00:29:39.404 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:39.405 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:39.405 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5: 00:29:39.405 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: ]] 00:29:39.405 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: 00:29:39.405 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:29:39.405 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:39.405 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:39.405 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:39.405 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:39.405 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:39.405 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:39.405 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.405 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.405 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.405 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:39.405 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:39.405 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:39.405 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:39.405 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:39.405 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:39.405 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:39.405 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:39.405 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:39.405 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:39.405 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:39.405 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:39.405 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.405 10:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.665 nvme0n1 00:29:39.665 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.665 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:39.665 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:39.665 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.665 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.665 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.665 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.665 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:39.665 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.665 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.665 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.665 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:39.665 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:29:39.665 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:39.665 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:39.665 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:39.665 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:39.665 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjlmZTE1Yjk5ZWFkMGNkODk5MzYxNWUxZDVlMjU3NTg1MDZjMzQ3NTU4YTIxZWE4UIEjJg==: 00:29:39.665 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: 00:29:39.665 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:39.665 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:39.665 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjlmZTE1Yjk5ZWFkMGNkODk5MzYxNWUxZDVlMjU3NTg1MDZjMzQ3NTU4YTIxZWE4UIEjJg==: 00:29:39.665 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: ]] 00:29:39.665 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: 00:29:39.665 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:29:39.665 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:39.665 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:39.665 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:39.665 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:39.665 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:39.665 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:39.665 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.665 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.926 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.926 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:39.926 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:39.926 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:39.926 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:39.926 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:39.926 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:39.926 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:39.926 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:39.926 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:39.926 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:39.926 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:39.926 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:39.926 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.926 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.188 nvme0n1 00:29:40.188 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.188 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:40.188 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2ExMTUzZGJlYjk0ZjdmZDU5MjM5NGE2NzkxNjYzMDFiODNkNDRjZmY2ZGZlZDY5ODhjZjMyNDVlYmM5NzQ0OKRTwl0=: 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2ExMTUzZGJlYjk0ZjdmZDU5MjM5NGE2NzkxNjYzMDFiODNkNDRjZmY2ZGZlZDY5ODhjZjMyNDVlYmM5NzQ0OKRTwl0=: 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:40.189 10:01:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.189 10:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.761 nvme0n1 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
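The echo 'hmac(sha512)' / echo <dhgroup> / echo DHHC-1:...: triplet at auth.sh@48..@51 is nvmet_auth_set_key programming the kernel nvmet target; the trace shows only the echoes, not their destinations. A sketch of what those redirections plausibly target, assuming the standard Linux nvmet configfs layout (paths and attribute names are an assumption, not shown in this log):

    # assumed layout (Linux nvmet with DH-CHAP support)
    hostnqn=nqn.2024-02.io.spdk:host0
    cfg=/sys/kernel/config/nvmet/hosts/$hostnqn
    echo 'hmac(sha512)'   > "$cfg/dhchap_hash"       # auth.sh@48
    echo ffdhe6144        > "$cfg/dhchap_dhgroup"    # auth.sh@49
    echo 'DHHC-1:03:...:' > "$cfg/dhchap_key"        # auth.sh@50, host secret
    # only when ckeys[keyid] is non-empty (auth.sh@51):
    echo 'DHHC-1:00:...:' > "$cfg/dhchap_ctrl_key"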
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNjYWQyY2Y4NGYxZmEyZmFhMDA0M2I5ZTNiMTE3NTRFzsRn: 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNjYWQyY2Y4NGYxZmEyZmFhMDA0M2I5ZTNiMTE3NTRFzsRn: 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: ]] 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTdjZDRlNDQ1Y2FhZTE2NWEyOTZlM2M5ZDQ5ZTA1ZjllMGZhZDc5NjNlYjBlMzdlMWZkOTdjMmRlNGFlNmY4OFlpDdM=: 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.761 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.331 nvme0n1 00:29:41.331 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.331 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:41.331 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:41.331 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.331 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.331 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.591 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.591 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:41.591 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.591 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.591 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.591 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:41.591 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:29:41.591 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:41.591 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:41.591 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:41.591 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:41.591 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:41.591 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 00:29:41.591 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:41.591 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:41.591 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:41.591 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: ]] 00:29:41.591 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 00:29:41.592 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:29:41.592 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:41.592 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:41.592 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:41.592 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:41.592 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:41.592 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:41.592 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.592 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.592 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.592 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:41.592 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:41.592 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:41.592 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:41.592 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:41.592 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:41.592 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:41.592 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:41.592 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:41.592 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:41.592 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:41.592 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:41.592 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.592 10:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.162 nvme0n1 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:42.162 10:01:57 
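
Each of these passes runs the same connect_authenticate shape: pin the initiator to one digest/DH-group pair with bdev_nvme_set_options, attach with the keyring names for that key id, confirm the controller actually materialized, then detach. Condensed into plain RPC calls (a sketch; rpc_cmd wraps SPDK's scripts/rpc.py, and the key1/ckey1 keyring entries were registered earlier in auth.sh, outside this excerpt):

    rpc=./scripts/rpc.py
    $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
         -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
         --dhchap-key key1 --dhchap-ctrlr-key ckey1
    [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    $rpc bdev_nvme_detach_controller nvme0
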
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5: 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5: 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: ]] 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.162 10:01:57 
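
A note on the DHHC-1 strings echoed throughout: this is the NVMe-oF textual form of a DH-HMAC-CHAP secret, DHHC-1:<t>:<base64>:, where <t> names the secret's hash transformation (00 = none, 01/02/03 = SHA-256/384/512) and the base64 payload is the raw secret followed by a 4-byte CRC-32. In the keys generated for this run the transform id tracks the secret size (01 -> 32, 02 -> 48, 03 -> 64 bytes). A quick structural check, using the key id 2 host secret verbatim from this log (sketch):

    k='DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5:'
    IFS=: read -r tag t b64 _ <<< "$k"
    total=$(printf '%s' "$b64" | base64 -d | wc -c)
    echo "transform=$t secret_bytes=$((total - 4))"   # last 4 bytes are the CRC-32
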
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.162 10:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.101 nvme0n1 00:29:43.101 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.101 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:43.101 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:43.101 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.101 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.101 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.101 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.101 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:43.101 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.101 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.101 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.101 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:43.101 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:29:43.101 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.101 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:43.101 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:43.101 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:43.101 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZjlmZTE1Yjk5ZWFkMGNkODk5MzYxNWUxZDVlMjU3NTg1MDZjMzQ3NTU4YTIxZWE4UIEjJg==: 00:29:43.101 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: 00:29:43.101 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:43.101 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:43.101 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjlmZTE1Yjk5ZWFkMGNkODk5MzYxNWUxZDVlMjU3NTg1MDZjMzQ3NTU4YTIxZWE4UIEjJg==: 00:29:43.101 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: ]] 00:29:43.101 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2FkM2JjZDdiZTYzOGM4ZDUxMzcwYmVjNTU2ZDQwNWTOZnI7: 00:29:43.101 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:29:43.102 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:43.102 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:43.102 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:43.102 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:43.102 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:43.102 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:43.102 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.102 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.102 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.102 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:43.102 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:43.102 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:43.102 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:43.102 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.102 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:43.102 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:43.102 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:43.102 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:43.102 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:43.102 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:43.102 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:43.102 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.102 
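
The host/auth.sh@101-@104 markers running through this stretch are the harness's test matrix: under the sha512 digest it walks the DH groups, and for each group all five key ids, re-keying the kernel target and repeating connect_authenticate every time. Paraphrased (keys[]/ckeys[] and the enclosing digest loop are defined earlier in auth.sh; the group list here is only what this excerpt exercises):

    digest=sha512
    for dhgroup in ffdhe6144 ffdhe8192; do
        for keyid in "${!keys[@]}"; do                          # key ids 0..4
            nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"  # re-key the kernel target
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify, detach
        done
    done
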
10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.673 nvme0n1 00:29:43.673 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.673 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:43.673 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:43.673 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.673 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.673 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.673 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.673 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:43.673 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.673 10:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2ExMTUzZGJlYjk0ZjdmZDU5MjM5NGE2NzkxNjYzMDFiODNkNDRjZmY2ZGZlZDY5ODhjZjMyNDVlYmM5NzQ0OKRTwl0=: 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2ExMTUzZGJlYjk0ZjdmZDU5MjM5NGE2NzkxNjYzMDFiODNkNDRjZmY2ZGZlZDY5ODhjZjMyNDVlYmM5NzQ0OKRTwl0=: 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.673 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.259 nvme0n1 00:29:44.259 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.259 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.259 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:44.259 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.259 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.259 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.259 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:44.259 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:44.259 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.259 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.521 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.521 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:44.521 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:44.521 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:44.521 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: ]] 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.522 request: 00:29:44.522 { 00:29:44.522 "name": "nvme0", 00:29:44.522 "trtype": "tcp", 00:29:44.522 "traddr": "10.0.0.1", 00:29:44.522 "adrfam": "ipv4", 00:29:44.522 "trsvcid": "4420", 00:29:44.522 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:44.522 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:44.522 "prchk_reftag": false, 00:29:44.522 "prchk_guard": false, 00:29:44.522 "hdgst": false, 00:29:44.522 "ddgst": false, 00:29:44.522 "allow_unrecognized_csi": false, 00:29:44.522 "method": "bdev_nvme_attach_controller", 00:29:44.522 "req_id": 1 00:29:44.522 } 00:29:44.522 Got JSON-RPC error response 00:29:44.522 response: 00:29:44.522 { 00:29:44.522 "code": -5, 00:29:44.522 "message": "Input/output error" 00:29:44.522 } 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
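
This is the first negative case: the target still demands DH-HMAC-CHAP, so an attach carrying no key must fail. The fabrics connect dies at the authentication step and the RPC surfaces it as JSON-RPC error -5, Input/output error; the NOT wrapper inverts the exit status so the step passes only when the RPC fails, and the jq length check at @114 confirms the failed attach left no controller behind. Roughly (a simplified sketch; the real helper in autotest_common.sh also manages xtrace state):

    NOT() {
        if "$@"; then return 1; fi   # unexpected success fails the test
        return 0                     # expected failure passes it
    }
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 \
        -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
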
00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.522 request: 00:29:44.522 { 00:29:44.522 "name": "nvme0", 00:29:44.522 "trtype": "tcp", 00:29:44.522 "traddr": "10.0.0.1", 00:29:44.522 "adrfam": "ipv4", 00:29:44.522 "trsvcid": "4420", 00:29:44.522 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:44.522 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:44.522 "prchk_reftag": false, 00:29:44.522 "prchk_guard": false, 00:29:44.522 "hdgst": false, 00:29:44.522 "ddgst": false, 00:29:44.522 "dhchap_key": "key2", 00:29:44.522 "allow_unrecognized_csi": false, 00:29:44.522 "method": "bdev_nvme_attach_controller", 00:29:44.522 "req_id": 1 00:29:44.522 } 00:29:44.522 Got JSON-RPC error response 00:29:44.522 response: 00:29:44.522 { 00:29:44.522 "code": -5, 00:29:44.522 "message": "Input/output error" 00:29:44.522 } 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
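
Same outcome with a key that exists but does not match: key2 against a target provisioned for key pair 1 fails the target's CHAP verification, and at the RPC level it is indistinguishable from the missing-key case (-5 again). The follow-up assertion mirrors @114/@120:

    (( $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ))   # nothing half-attached

The case that follows flips direction: the right host key (key1) paired with the wrong controller key (ckey2), so this time it is the host that rejects the controller's response; it too must end in -5 with zero controllers left.
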
00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:44.522 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:44.523 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:44.523 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:44.523 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:44.523 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:44.523 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:44.523 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:44.523 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:44.523 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:44.523 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.523 10:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.783 request: 00:29:44.783 { 00:29:44.783 "name": "nvme0", 00:29:44.783 "trtype": "tcp", 00:29:44.783 "traddr": "10.0.0.1", 00:29:44.783 "adrfam": "ipv4", 00:29:44.783 "trsvcid": "4420", 00:29:44.783 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:44.783 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:44.783 "prchk_reftag": false, 00:29:44.783 "prchk_guard": false, 00:29:44.783 "hdgst": false, 00:29:44.783 "ddgst": false, 00:29:44.783 "dhchap_key": "key1", 00:29:44.783 "dhchap_ctrlr_key": "ckey2", 00:29:44.783 "allow_unrecognized_csi": false, 00:29:44.783 "method": "bdev_nvme_attach_controller", 00:29:44.783 "req_id": 1 00:29:44.783 } 00:29:44.783 Got JSON-RPC error response 00:29:44.783 response: 00:29:44.783 { 00:29:44.783 "code": -5, 00:29:44.783 "message": "Input/output 
error" 00:29:44.783 } 00:29:44.783 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:44.783 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:44.783 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:44.783 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:44.783 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:44.783 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:29:44.783 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:44.783 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:44.783 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:44.783 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:44.783 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:44.783 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:44.783 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:44.783 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:44.783 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:44.783 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:44.783 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:44.783 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.783 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.783 nvme0n1 00:29:44.783 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.783 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:44.783 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:44.783 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:44.783 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:44.783 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:44.783 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5: 00:29:44.783 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: 00:29:44.783 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:44.783 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:44.783 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5: 00:29:44.783 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: ]] 00:29:44.783 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: 00:29:44.783 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:44.783 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.783 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.043 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.043 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:29:45.043 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:29:45.043 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.043 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.043 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.043 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:45.043 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:45.043 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:45.043 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:45.043 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:45.043 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:45.043 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:45.043 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:45.043 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:45.043 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.043 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.043 request: 00:29:45.043 { 00:29:45.044 "name": "nvme0", 00:29:45.044 "dhchap_key": "key1", 00:29:45.044 "dhchap_ctrlr_key": "ckey2", 00:29:45.044 "method": "bdev_nvme_set_keys", 00:29:45.044 "req_id": 1 00:29:45.044 } 00:29:45.044 Got JSON-RPC error response 00:29:45.044 response: 00:29:45.044 { 00:29:45.044 "code": -13, 00:29:45.044 "message": "Permission denied" 00:29:45.044 } 00:29:45.044 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:45.044 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:45.044 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:45.044 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:45.044 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:29:45.044 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:45.044 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:45.044 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.044 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.044 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.044 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:29:45.044 10:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:29:45.987 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:45.987 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:45.987 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.987 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.247 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.247 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:29:46.247 10:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:29:47.186 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:47.186 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:47.186 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.186 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.186 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.186 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:29:47.186 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:47.186 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:47.186 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:47.186 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:47.186 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:47.186 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:47.186 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 00:29:47.186 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:47.186 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:47.186 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTNkN2JjM2M5NjFkNzA2NzNmNmViMWQyZWQ5ZjkyNmRmNjgzMDlmZjc1NzNjMWJm2cD3fw==: 00:29:47.186 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: ]] 00:29:47.186 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MmEyMWQwNzgzNjFlM2VmMzdiYWJlYmRjYjYxOTMzMGJhMDZjYmY5YmEzYTA4YTQ5dHvj7A==: 00:29:47.186 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:29:47.186 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:47.186 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:47.186 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:47.186 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:47.186 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:47.186 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:47.186 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:47.186 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:47.186 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:47.186 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:47.186 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:47.187 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.187 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.448 nvme0n1 00:29:47.448 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.448 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:47.448 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:47.448 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:47.448 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:47.448 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:47.448 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5: 00:29:47.448 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: 00:29:47.448 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:47.448 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:47.448 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTliMmI0ZDFiN2Y5NjgzMjQxZWU2YWRiNzRmMjE4YTGDvue5: 00:29:47.448 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: ]] 00:29:47.448 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE2YmI1ZDdlN2Q4NTljMjY1NjdmOGZkNjZiYmI2MzKdGMSr: 00:29:47.448 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:47.448 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:29:47.448 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:47.448 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:47.448 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:47.448 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:47.448 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:47.448 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:47.449 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.449 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.449 request: 00:29:47.449 { 00:29:47.449 "name": "nvme0", 00:29:47.449 "dhchap_key": "key2", 00:29:47.449 "dhchap_ctrlr_key": "ckey1", 00:29:47.449 "method": "bdev_nvme_set_keys", 00:29:47.449 "req_id": 1 00:29:47.449 } 00:29:47.449 Got JSON-RPC error response 00:29:47.449 response: 00:29:47.449 { 00:29:47.449 "code": -13, 00:29:47.449 "message": "Permission denied" 00:29:47.449 } 00:29:47.449 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:47.449 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:47.449 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:47.449 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:47.449 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:47.449 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:29:47.449 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:29:47.449 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.449 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.449 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.449 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:29:47.449 10:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:29:48.388 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:29:48.388 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:29:48.388 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.388 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.388 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.648 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:29:48.648 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:29:48.648 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:29:48.648 10:02:03 
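
The @132-@149 stretch above is the live re-keying test. The kernel target is rotated to key pair 2, after which bdev_nvme_set_keys may rotate the attached controller to the matching pair; rotating to a pair the target rejects fails with -13, Permission denied, and since the controller can no longer re-authenticate it is torn down once the one-second --ctrlr-loss-timeout-sec from the attach expires, which the @137/@148 polls wait out. Condensed (a sketch; key names as in the log, and the @146-@147 round repeats the same idea with the controller key mismatched instead):

    nvmet_auth_set_key sha256 ffdhe2048 2                 # target now expects pair 2
    rpc_cmd bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2        # matching rotation succeeds
    NOT rpc_cmd bdev_nvme_set_keys nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey2        # mismatch: -13 Permission denied
    while (( $(rpc_cmd bdev_nvme_get_controllers | jq length) != 0 )); do
        sleep 1s                                          # controller drops on loss timeout
    done
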
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:29:48.648 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:48.648 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:29:48.648 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:48.648 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:29:48.648 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:48.648 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:48.648 rmmod nvme_tcp 00:29:48.648 rmmod nvme_fabrics 00:29:48.648 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:48.648 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:29:48.648 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:29:48.648 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 4036316 ']' 00:29:48.648 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 4036316 00:29:48.648 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 4036316 ']' 00:29:48.648 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 4036316 00:29:48.648 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:29:48.648 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:48.648 10:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4036316 00:29:48.648 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:48.648 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:48.648 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4036316' 00:29:48.648 killing process with pid 4036316 00:29:48.648 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 4036316 00:29:48.648 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 4036316 00:29:48.648 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:48.648 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:48.648 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:48.648 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:29:48.648 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:29:48.648 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:48.648 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:29:48.648 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:48.648 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:48.648 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.648 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:29:48.648 10:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:51.193 10:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:51.193 10:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:51.193 10:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:51.193 10:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:29:51.193 10:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:29:51.193 10:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:29:51.193 10:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:51.193 10:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:51.193 10:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:51.193 10:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:51.193 10:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:29:51.193 10:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:29:51.193 10:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:54.492 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:54.492 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:54.492 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:54.492 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:54.492 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:54.492 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:54.492 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:54.492 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:54.492 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:54.492 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:54.492 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:54.492 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:54.492 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:54.492 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:54.492 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:54.492 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:54.752 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:29:55.012 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.z5Y /tmp/spdk.key-null.gFl /tmp/spdk.key-sha256.yOh /tmp/spdk.key-sha384.auu /tmp/spdk.key-sha512.Je7 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:29:55.012 10:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:58.313 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:58.313 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:58.313 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:29:58.313 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:58.313 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:58.313 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:58.313 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:58.313 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:58.313 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:58.313 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:29:58.313 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:58.313 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:29:58.313 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:58.573 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:58.573 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:58.573 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:58.573 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:58.835 00:29:58.835 real 1m0.863s 00:29:58.835 user 0m54.664s 00:29:58.835 sys 0m16.086s 00:29:58.835 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:58.835 10:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.835 ************************************ 00:29:58.835 END TEST nvmf_auth_host 00:29:58.835 ************************************ 00:29:58.835 10:02:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:29:58.835 10:02:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:58.835 10:02:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:58.835 10:02:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:58.835 10:02:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.835 ************************************ 00:29:58.835 START TEST nvmf_digest 00:29:58.835 ************************************ 00:29:58.835 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:59.096 * Looking for test storage... 
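A note on the nvmf_auth_host suite that finished above: among its final assertions is the negative re-keying path, where rotating the host to a key the target was never given must be rejected. With the harness wrappers stripped and the long workspace paths abbreviated to scripts/rpc.py, the check reduces to roughly the following sketch of the commands echoed in the log, not a standalone recipe:

  # attach with the key pair both sides agreed on (succeeds)
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1 \
    --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
  # rotate to a key the target does not hold: expected to fail
  scripts/rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
  # -> JSON-RPC error code -13, "Permission denied", matching the request/response captured above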
00:29:59.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:59.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.096 --rc genhtml_branch_coverage=1 00:29:59.096 --rc genhtml_function_coverage=1 00:29:59.096 --rc genhtml_legend=1 00:29:59.096 --rc geninfo_all_blocks=1 00:29:59.096 --rc geninfo_unexecuted_blocks=1 00:29:59.096 00:29:59.096 ' 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:59.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.096 --rc genhtml_branch_coverage=1 00:29:59.096 --rc genhtml_function_coverage=1 00:29:59.096 --rc genhtml_legend=1 00:29:59.096 --rc geninfo_all_blocks=1 00:29:59.096 --rc geninfo_unexecuted_blocks=1 00:29:59.096 00:29:59.096 ' 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:59.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.096 --rc genhtml_branch_coverage=1 00:29:59.096 --rc genhtml_function_coverage=1 00:29:59.096 --rc genhtml_legend=1 00:29:59.096 --rc geninfo_all_blocks=1 00:29:59.096 --rc geninfo_unexecuted_blocks=1 00:29:59.096 00:29:59.096 ' 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:59.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.096 --rc genhtml_branch_coverage=1 00:29:59.096 --rc genhtml_function_coverage=1 00:29:59.096 --rc genhtml_legend=1 00:29:59.096 --rc geninfo_all_blocks=1 00:29:59.096 --rc geninfo_unexecuted_blocks=1 00:29:59.096 00:29:59.096 ' 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:59.096 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:59.097 
10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:59.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:59.097 10:02:14 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:29:59.097 10:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:07.240 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:07.240 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:30:07.240 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:07.240 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:07.240 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:07.240 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:07.240 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:07.241 
10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:07.241 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:07.241 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:07.241 Found net devices under 0000:4b:00.0: cvl_0_0 
00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:07.241 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:07.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:07.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.580 ms 00:30:07.241 00:30:07.241 --- 10.0.0.2 ping statistics --- 00:30:07.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:07.241 rtt min/avg/max/mdev = 0.580/0.580/0.580/0.000 ms 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:07.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:07.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:30:07.241 00:30:07.241 --- 10.0.0.1 ping statistics --- 00:30:07.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:07.241 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:07.241 10:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:07.241 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:07.241 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:30:07.241 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:30:07.241 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:07.241 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:07.241 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:07.241 ************************************ 00:30:07.242 START TEST nvmf_digest_clean 00:30:07.242 ************************************ 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=4053306 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 4053306 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 4053306 ']' 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:07.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:07.242 [2024-11-27 10:02:22.126231] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:30:07.242 [2024-11-27 10:02:22.126293] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:07.242 [2024-11-27 10:02:22.200975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.242 [2024-11-27 10:02:22.246137] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:07.242 [2024-11-27 10:02:22.246195] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:07.242 [2024-11-27 10:02:22.246202] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:07.242 [2024-11-27 10:02:22.246210] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:07.242 [2024-11-27 10:02:22.246215] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
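The target side of the digest suite runs inside the cvl_0_0_ns_spdk namespace created during nvmftestinit above. Reduced to its essentials (paths abbreviated; a sketch of the echoed startup, where waitforlisten is the harness helper that polls the /var/tmp/spdk.sock RPC socket):

  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!   # 4053306 in this run
  # once the RPC socket appears, the suite completes init and configures a
  # null bdev plus a TCP listener on 10.0.0.2:4420, as the records below show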
00:30:07.242 [2024-11-27 10:02:22.246911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:07.242 null0 00:30:07.242 [2024-11-27 10:02:22.458431] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:07.242 [2024-11-27 10:02:22.482747] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4053326 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4053326 /var/tmp/bperf.sock 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 4053326 ']' 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:07.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:07.242 10:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:07.242 [2024-11-27 10:02:22.544142] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:30:07.242 [2024-11-27 10:02:22.544214] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4053326 ] 00:30:07.242 [2024-11-27 10:02:22.635620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.242 [2024-11-27 10:02:22.688266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:08.185 10:02:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:08.185 10:02:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:08.185 10:02:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:08.185 10:02:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:08.185 10:02:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:08.185 10:02:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:08.185 10:02:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:08.758 nvme0n1 00:30:08.758 10:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:08.758 10:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:08.758 Running I/O for 2 seconds... 
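Every run_bperf pass uses the same client-side choreography, visible in the RPCs echoed above: start bdevperf paused on a private socket, finish framework init, attach the remote controller with data digest enabled (--ddgst), then kick off the workload from bdevperf.py. Roughly, with paths abbreviated (a sketch of the commands the harness ran, not a standalone recipe):

  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

In the result table that follows, MiB/s is simply IOPS x io_size / 2^20; e.g. 20679.24 x 4096 / 1048576 = 80.78.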
00:30:11.089 20385.00 IOPS, 79.63 MiB/s [2024-11-27T09:02:26.555Z] 20658.50 IOPS, 80.70 MiB/s 00:30:11.089 Latency(us) 00:30:11.089 [2024-11-27T09:02:26.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:11.089 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:11.089 nvme0n1 : 2.01 20679.24 80.78 0.00 0.00 6181.20 3208.53 15510.19 00:30:11.089 [2024-11-27T09:02:26.555Z] =================================================================================================================== 00:30:11.089 [2024-11-27T09:02:26.555Z] Total : 20679.24 80.78 0.00 0.00 6181.20 3208.53 15510.19 00:30:11.089 { 00:30:11.089 "results": [ 00:30:11.089 { 00:30:11.089 "job": "nvme0n1", 00:30:11.089 "core_mask": "0x2", 00:30:11.089 "workload": "randread", 00:30:11.089 "status": "finished", 00:30:11.089 "queue_depth": 128, 00:30:11.089 "io_size": 4096, 00:30:11.089 "runtime": 2.005828, 00:30:11.089 "iops": 20679.2406926217, 00:30:11.089 "mibps": 80.77828395555352, 00:30:11.089 "io_failed": 0, 00:30:11.089 "io_timeout": 0, 00:30:11.089 "avg_latency_us": 6181.20392616344, 00:30:11.089 "min_latency_us": 3208.5333333333333, 00:30:11.089 "max_latency_us": 15510.186666666666 00:30:11.089 } 00:30:11.089 ], 00:30:11.089 "core_count": 1 00:30:11.089 } 00:30:11.089 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:11.089 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:11.089 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:11.089 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:11.089 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:11.089 | select(.opcode=="crc32c") 00:30:11.089 | "\(.module_name) \(.executed)"' 00:30:11.089 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:11.089 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:11.089 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:11.089 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:11.089 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4053326 00:30:11.089 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 4053326 ']' 00:30:11.089 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 4053326 00:30:11.089 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:11.089 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:11.089 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4053326 00:30:11.089 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:11.089 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:30:11.089 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4053326' 00:30:11.089 killing process with pid 4053326 00:30:11.089 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 4053326 00:30:11.089 Received shutdown signal, test time was about 2.000000 seconds 00:30:11.089 00:30:11.089 Latency(us) 00:30:11.089 [2024-11-27T09:02:26.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:11.089 [2024-11-27T09:02:26.555Z] =================================================================================================================== 00:30:11.089 [2024-11-27T09:02:26.555Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:11.089 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 4053326 00:30:11.089 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:30:11.089 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:11.089 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:11.089 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:30:11.089 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:11.089 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:11.089 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:11.089 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4054032 00:30:11.089 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4054032 /var/tmp/bperf.sock 00:30:11.089 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 4054032 ']' 00:30:11.090 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:11.090 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:11.090 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:11.090 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:11.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:11.090 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:11.090 10:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:11.350 [2024-11-27 10:02:26.563563] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
00:30:11.350 [2024-11-27 10:02:26.563622] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4054032 ] 00:30:11.350 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:11.350 Zero copy mechanism will not be used. 00:30:11.350 [2024-11-27 10:02:26.646625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.350 [2024-11-27 10:02:26.676243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:11.921 10:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:11.921 10:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:11.921 10:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:11.921 10:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:11.921 10:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:12.181 10:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:12.181 10:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:12.749 nvme0n1 00:30:12.749 10:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:12.749 10:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:12.749 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:12.749 Zero copy mechanism will not be used. 00:30:12.749 Running I/O for 2 seconds... 
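This second pass changes only the workload shape: -o 131072 at queue depth 16 instead of 4 KiB at depth 128. The repeated notice about the 65536-byte zero copy threshold is informational: payloads above it are sent without the socket zero-copy path, which is presumably why it fires for these 128 KiB buffers. The same throughput arithmetic applies to the table below: 3209.53 IOPS x 131072 / 1048576 = 401.19 MiB/s.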
00:30:14.630 3100.00 IOPS, 387.50 MiB/s [2024-11-27T09:02:30.096Z] 3206.50 IOPS, 400.81 MiB/s 00:30:14.630 Latency(us) 00:30:14.630 [2024-11-27T09:02:30.096Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:14.630 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:14.630 nvme0n1 : 2.00 3209.53 401.19 0.00 0.00 4982.81 590.51 9448.11 00:30:14.630 [2024-11-27T09:02:30.096Z] =================================================================================================================== 00:30:14.630 [2024-11-27T09:02:30.096Z] Total : 3209.53 401.19 0.00 0.00 4982.81 590.51 9448.11 00:30:14.630 { 00:30:14.630 "results": [ 00:30:14.630 { 00:30:14.630 "job": "nvme0n1", 00:30:14.630 "core_mask": "0x2", 00:30:14.630 "workload": "randread", 00:30:14.630 "status": "finished", 00:30:14.630 "queue_depth": 16, 00:30:14.630 "io_size": 131072, 00:30:14.630 "runtime": 2.003097, 00:30:14.630 "iops": 3209.5300427288344, 00:30:14.630 "mibps": 401.1912553411043, 00:30:14.630 "io_failed": 0, 00:30:14.630 "io_timeout": 0, 00:30:14.630 "avg_latency_us": 4982.814687613418, 00:30:14.630 "min_latency_us": 590.5066666666667, 00:30:14.630 "max_latency_us": 9448.106666666667 00:30:14.630 } 00:30:14.630 ], 00:30:14.630 "core_count": 1 00:30:14.630 } 00:30:14.630 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:14.631 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:14.631 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:14.631 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:14.631 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:14.631 | select(.opcode=="crc32c") 00:30:14.631 | "\(.module_name) \(.executed)"' 00:30:14.891 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:14.891 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:14.891 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:14.891 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:14.891 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4054032 00:30:14.891 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 4054032 ']' 00:30:14.891 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 4054032 00:30:14.891 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:14.891 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:14.891 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4054032 00:30:14.891 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:14.891 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:30:14.891 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4054032' 00:30:14.891 killing process with pid 4054032 00:30:14.891 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 4054032 00:30:14.891 Received shutdown signal, test time was about 2.000000 seconds 00:30:14.891 00:30:14.891 Latency(us) 00:30:14.891 [2024-11-27T09:02:30.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:14.891 [2024-11-27T09:02:30.357Z] =================================================================================================================== 00:30:14.891 [2024-11-27T09:02:30.357Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:14.891 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 4054032 00:30:15.152 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:30:15.152 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:15.152 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:15.152 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:15.152 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:15.152 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:15.152 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:15.152 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4054866 00:30:15.152 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4054866 /var/tmp/bperf.sock 00:30:15.152 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 4054866 ']' 00:30:15.152 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:15.152 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:15.152 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:15.152 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:15.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:15.152 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:15.152 10:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:15.152 [2024-11-27 10:02:30.463635] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
00:30:15.152 [2024-11-27 10:02:30.463701] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4054866 ] 00:30:15.153 [2024-11-27 10:02:30.549111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.153 [2024-11-27 10:02:30.578906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:16.093 10:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:16.093 10:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:16.093 10:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:16.093 10:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:16.093 10:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:16.093 10:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:16.093 10:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:16.354 nvme0n1 00:30:16.354 10:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:16.354 10:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:16.614 Running I/O for 2 seconds... 
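[Editor's sketch] In each Latency table the MiB/s column is simply IOPS scaled by the I/O size, so the reported throughputs can be sanity-checked by hand; for the first clean run, 3209.53 IOPS at 131072-byte reads matches the printed 401.19 MiB/s:

  # Throughput check for the randread 128 KiB run (values from the table above).
  awk 'BEGIN { printf "%.2f MiB/s\n", 3209.53 * 131072 / (1024 * 1024) }'
  # -> 401.19 MiB/s

The same arithmetic ties together the randwrite run that follows (30327.92 IOPS x 4096 B / 2^20 = 118.47 MiB/s).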
00:30:18.566 30155.00 IOPS, 117.79 MiB/s [2024-11-27T09:02:34.032Z] 30309.50 IOPS, 118.40 MiB/s 00:30:18.566 Latency(us) 00:30:18.566 [2024-11-27T09:02:34.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:18.566 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:18.566 nvme0n1 : 2.00 30327.92 118.47 0.00 0.00 4216.01 2088.96 15510.19 00:30:18.566 [2024-11-27T09:02:34.032Z] =================================================================================================================== 00:30:18.566 [2024-11-27T09:02:34.032Z] Total : 30327.92 118.47 0.00 0.00 4216.01 2088.96 15510.19 00:30:18.566 { 00:30:18.566 "results": [ 00:30:18.566 { 00:30:18.566 "job": "nvme0n1", 00:30:18.566 "core_mask": "0x2", 00:30:18.566 "workload": "randwrite", 00:30:18.566 "status": "finished", 00:30:18.566 "queue_depth": 128, 00:30:18.566 "io_size": 4096, 00:30:18.566 "runtime": 2.003006, 00:30:18.566 "iops": 30327.91714053777, 00:30:18.566 "mibps": 118.46842633022567, 00:30:18.566 "io_failed": 0, 00:30:18.566 "io_timeout": 0, 00:30:18.566 "avg_latency_us": 4216.0128831602115, 00:30:18.566 "min_latency_us": 2088.96, 00:30:18.566 "max_latency_us": 15510.186666666666 00:30:18.566 } 00:30:18.566 ], 00:30:18.566 "core_count": 1 00:30:18.566 } 00:30:18.566 10:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:18.566 10:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:18.566 10:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:18.566 10:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:18.566 | select(.opcode=="crc32c") 00:30:18.566 | "\(.module_name) \(.executed)"' 00:30:18.566 10:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:18.851 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:18.851 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:18.851 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:18.851 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:18.851 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4054866 00:30:18.851 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 4054866 ']' 00:30:18.851 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 4054866 00:30:18.851 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:18.851 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:18.851 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4054866 00:30:18.851 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:18.851 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:30:18.851 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4054866' 00:30:18.851 killing process with pid 4054866 00:30:18.851 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 4054866 00:30:18.851 Received shutdown signal, test time was about 2.000000 seconds 00:30:18.851 00:30:18.851 Latency(us) 00:30:18.851 [2024-11-27T09:02:34.318Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:18.852 [2024-11-27T09:02:34.318Z] =================================================================================================================== 00:30:18.852 [2024-11-27T09:02:34.318Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:18.852 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 4054866 00:30:18.852 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:30:18.852 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:18.852 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:18.852 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:18.852 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:18.852 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:18.852 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:18.852 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4055654 00:30:18.852 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4055654 /var/tmp/bperf.sock 00:30:18.852 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 4055654 ']' 00:30:18.852 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:18.852 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:18.852 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:18.852 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:18.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:18.852 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:18.852 10:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:18.852 [2024-11-27 10:02:34.250702] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
00:30:18.852 [2024-11-27 10:02:34.250759] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4055654 ] 00:30:18.852 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:18.852 Zero copy mechanism will not be used. 00:30:19.112 [2024-11-27 10:02:34.331466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.112 [2024-11-27 10:02:34.360868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:19.682 10:02:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:19.683 10:02:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:19.683 10:02:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:19.683 10:02:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:19.683 10:02:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:19.943 10:02:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:19.943 10:02:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:20.203 nvme0n1 00:30:20.203 10:02:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:20.203 10:02:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:20.463 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:20.463 Zero copy mechanism will not be used. 00:30:20.463 Running I/O for 2 seconds... 
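[Editor's sketch] After every run the script decides whether the digests were actually computed, and by which accel module, by reading statistics back over the same socket and keeping only the crc32c row; the check passes when the executed count is non-zero and the module matches expectations (software here, since scan_dsa=false). A sketch of that verification, reusing the jq filter traced in this log:

  # Pull the crc32c accel stats row as "module executed" and validate it.
  read -r acc_module acc_executed < <(
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')

  # With DSA scanning disabled, crc32c must have run in software at least once.
  (( acc_executed > 0 )) && [[ $acc_module == software ]] || echo 'digest stats check failed'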
00:30:22.348 6190.00 IOPS, 773.75 MiB/s [2024-11-27T09:02:37.814Z] 5627.50 IOPS, 703.44 MiB/s 00:30:22.348 Latency(us) 00:30:22.348 [2024-11-27T09:02:37.814Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:22.348 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:22.348 nvme0n1 : 2.01 5622.22 702.78 0.00 0.00 2840.42 1228.80 11796.48 00:30:22.348 [2024-11-27T09:02:37.814Z] =================================================================================================================== 00:30:22.348 [2024-11-27T09:02:37.814Z] Total : 5622.22 702.78 0.00 0.00 2840.42 1228.80 11796.48 00:30:22.348 { 00:30:22.348 "results": [ 00:30:22.348 { 00:30:22.348 "job": "nvme0n1", 00:30:22.348 "core_mask": "0x2", 00:30:22.348 "workload": "randwrite", 00:30:22.348 "status": "finished", 00:30:22.348 "queue_depth": 16, 00:30:22.348 "io_size": 131072, 00:30:22.348 "runtime": 2.005614, 00:30:22.348 "iops": 5622.218432858965, 00:30:22.348 "mibps": 702.7773041073706, 00:30:22.348 "io_failed": 0, 00:30:22.348 "io_timeout": 0, 00:30:22.348 "avg_latency_us": 2840.4239801348, 00:30:22.348 "min_latency_us": 1228.8, 00:30:22.348 "max_latency_us": 11796.48 00:30:22.348 } 00:30:22.348 ], 00:30:22.348 "core_count": 1 00:30:22.348 } 00:30:22.348 10:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:22.348 10:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:22.348 10:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:22.348 10:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:22.348 | select(.opcode=="crc32c") 00:30:22.348 | "\(.module_name) \(.executed)"' 00:30:22.348 10:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:22.609 10:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:22.609 10:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:22.609 10:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:22.609 10:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:22.609 10:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4055654 00:30:22.609 10:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 4055654 ']' 00:30:22.609 10:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 4055654 00:30:22.609 10:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:22.609 10:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:22.609 10:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4055654 00:30:22.609 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:22.609 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:30:22.609 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4055654' 00:30:22.609 killing process with pid 4055654 00:30:22.609 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 4055654 00:30:22.609 Received shutdown signal, test time was about 2.000000 seconds 00:30:22.609 00:30:22.609 Latency(us) 00:30:22.609 [2024-11-27T09:02:38.075Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:22.609 [2024-11-27T09:02:38.075Z] =================================================================================================================== 00:30:22.609 [2024-11-27T09:02:38.075Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:22.609 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 4055654 00:30:22.870 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 4053306 00:30:22.870 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 4053306 ']' 00:30:22.870 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 4053306 00:30:22.870 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:22.870 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:22.870 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4053306 00:30:22.870 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:22.870 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:22.870 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4053306' 00:30:22.870 killing process with pid 4053306 00:30:22.870 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 4053306 00:30:22.870 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 4053306 00:30:22.870 00:30:22.870 real 0m16.239s 00:30:22.870 user 0m32.564s 00:30:22.870 sys 0m3.768s 00:30:22.870 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:22.870 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:22.870 ************************************ 00:30:22.870 END TEST nvmf_digest_clean 00:30:22.870 ************************************ 00:30:23.130 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:30:23.130 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:23.130 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:23.130 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:23.130 ************************************ 00:30:23.130 START TEST nvmf_digest_error 00:30:23.130 ************************************ 00:30:23.130 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 
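[Editor's sketch] Where the clean tests only verified that digests were computed, nvmf_digest_error (starting here) reroutes the crc32c opcode on the target to SPDK's error-injecting accel module before framework init (accel_assign_opc -o crc32c -m error, traced below) and then arms it on demand, so that every data digest the target produces is wrong. The injection half, using only the RPCs that appear later in this log (rpc_cmd is the harness wrapper around the target's rpc.py):

  # Route crc32c through the error module; must happen before framework init.
  rpc_cmd accel_assign_opc -o crc32c -m error

  # Keep injection disabled while the controller attaches cleanly...
  rpc_cmd accel_error_inject_error -o crc32c -t disable

  # ...then corrupt the next 256 crc32c results to force digest failures.
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256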
00:30:23.130 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:30:23.130 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:23.131 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:23.131 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:23.131 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=4056421 00:30:23.131 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 4056421 00:30:23.131 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:23.131 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 4056421 ']' 00:30:23.131 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:23.131 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:23.131 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:23.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:23.131 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:23.131 10:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:23.131 [2024-11-27 10:02:38.447478] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:30:23.131 [2024-11-27 10:02:38.447561] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:23.131 [2024-11-27 10:02:38.544251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:23.131 [2024-11-27 10:02:38.577204] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:23.131 [2024-11-27 10:02:38.577235] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:23.131 [2024-11-27 10:02:38.577241] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:23.131 [2024-11-27 10:02:38.577246] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:23.131 [2024-11-27 10:02:38.577250] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:23.131 [2024-11-27 10:02:38.577730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:24.073 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:24.073 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:30:24.074 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:24.074 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:24.074 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:24.074 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:24.074 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:30:24.074 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.074 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:24.074 [2024-11-27 10:02:39.279736] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:30:24.074 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.074 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:30:24.074 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:30:24.074 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.074 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:24.074 null0 00:30:24.074 [2024-11-27 10:02:39.357853] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:24.074 [2024-11-27 10:02:39.382048] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:24.074 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.074 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:30:24.074 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:24.074 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:30:24.074 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:30:24.074 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:30:24.074 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4056640 00:30:24.074 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4056640 /var/tmp/bperf.sock 00:30:24.074 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 4056640 ']' 00:30:24.074 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:30:24.074 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:24.074 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:24.074 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:24.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:24.074 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:24.074 10:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:24.074 [2024-11-27 10:02:39.438485] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:30:24.074 [2024-11-27 10:02:39.438534] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4056640 ] 00:30:24.074 [2024-11-27 10:02:39.520768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:24.334 [2024-11-27 10:02:39.550647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:24.906 10:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:24.906 10:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:30:24.906 10:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:24.906 10:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:25.167 10:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:25.167 10:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.167 10:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:25.167 10:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.167 10:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:25.167 10:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:25.426 nvme0n1 00:30:25.426 10:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:25.426 10:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.426 10:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
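[Editor's sketch] Everything that follows is the intended failure signature, not a malfunction: with crc32c corrupted on the target, each C2H data PDU fails host-side digest validation (the nvme_tcp_accel_seq_recv_compute_crc32_done errors), and the command completes with a transient transport error that bdevperf retries indefinitely thanks to --bdev-retry-count -1 above; only the cid/lba vary between entries. A quick tally of the failures, assuming the bdevperf console output were captured to a file (bperf.log here is hypothetical, not part of this run):

  # Hypothetical post-processing of captured output: count digest failures
  # against the transient-transport completions they triggered.
  grep -c 'data digest error' bperf.log
  grep -c 'TRANSIENT TRANSPORT ERROR' bperf.log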
00:30:25.427 10:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.427 10:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:25.427 10:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:25.427 Running I/O for 2 seconds... 00:30:25.427 [2024-11-27 10:02:40.780169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.427 [2024-11-27 10:02:40.780207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.427 [2024-11-27 10:02:40.780217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.427 [2024-11-27 10:02:40.791222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.427 [2024-11-27 10:02:40.791241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.427 [2024-11-27 10:02:40.791249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.427 [2024-11-27 10:02:40.799705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.427 [2024-11-27 10:02:40.799725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.427 [2024-11-27 10:02:40.799732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.427 [2024-11-27 10:02:40.810858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.427 [2024-11-27 10:02:40.810878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.427 [2024-11-27 10:02:40.810885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.427 [2024-11-27 10:02:40.819126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.427 [2024-11-27 10:02:40.819145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.427 [2024-11-27 10:02:40.819152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.427 [2024-11-27 10:02:40.828639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.427 [2024-11-27 10:02:40.828658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.427 [2024-11-27 10:02:40.828665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.427 [2024-11-27 10:02:40.836902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.427 [2024-11-27 10:02:40.836920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.427 [2024-11-27 10:02:40.836927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.427 [2024-11-27 10:02:40.846904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.427 [2024-11-27 10:02:40.846924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:14769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.427 [2024-11-27 10:02:40.846930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.427 [2024-11-27 10:02:40.855476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.427 [2024-11-27 10:02:40.855494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.427 [2024-11-27 10:02:40.855501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.427 [2024-11-27 10:02:40.865406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.427 [2024-11-27 10:02:40.865425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.427 [2024-11-27 10:02:40.865431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.427 [2024-11-27 10:02:40.876546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.427 [2024-11-27 10:02:40.876564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.427 [2024-11-27 10:02:40.876575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.427 [2024-11-27 10:02:40.885070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.427 [2024-11-27 10:02:40.885088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.427 [2024-11-27 10:02:40.885095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.687 [2024-11-27 10:02:40.895204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.687 [2024-11-27 10:02:40.895224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.687 [2024-11-27 10:02:40.895231] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.687 [2024-11-27 10:02:40.906239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.687 [2024-11-27 10:02:40.906257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.687 [2024-11-27 10:02:40.906264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.687 [2024-11-27 10:02:40.913835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.687 [2024-11-27 10:02:40.913852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.687 [2024-11-27 10:02:40.913859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.687 [2024-11-27 10:02:40.923635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.687 [2024-11-27 10:02:40.923653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.687 [2024-11-27 10:02:40.923660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.687 [2024-11-27 10:02:40.933011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.687 [2024-11-27 10:02:40.933029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.687 [2024-11-27 10:02:40.933036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.687 [2024-11-27 10:02:40.942240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.687 [2024-11-27 10:02:40.942262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.687 [2024-11-27 10:02:40.942271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.687 [2024-11-27 10:02:40.951476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.687 [2024-11-27 10:02:40.951494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.687 [2024-11-27 10:02:40.951500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.687 [2024-11-27 10:02:40.959461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.687 [2024-11-27 10:02:40.959479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.687 [2024-11-27 
10:02:40.959486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.687 [2024-11-27 10:02:40.969341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.687 [2024-11-27 10:02:40.969358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.687 [2024-11-27 10:02:40.969365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.687 [2024-11-27 10:02:40.977707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.687 [2024-11-27 10:02:40.977724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.687 [2024-11-27 10:02:40.977731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.687 [2024-11-27 10:02:40.987508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.687 [2024-11-27 10:02:40.987526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.687 [2024-11-27 10:02:40.987533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.687 [2024-11-27 10:02:40.996041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.687 [2024-11-27 10:02:40.996061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.687 [2024-11-27 10:02:40.996068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.687 [2024-11-27 10:02:41.005163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.687 [2024-11-27 10:02:41.005181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.687 [2024-11-27 10:02:41.005187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.687 [2024-11-27 10:02:41.013755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.687 [2024-11-27 10:02:41.013772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.687 [2024-11-27 10:02:41.013779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.687 [2024-11-27 10:02:41.022767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.687 [2024-11-27 10:02:41.022785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5443 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:25.688 [2024-11-27 10:02:41.022791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.688 [2024-11-27 10:02:41.032142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.688 [2024-11-27 10:02:41.032165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.688 [2024-11-27 10:02:41.032172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.688 [2024-11-27 10:02:41.041654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.688 [2024-11-27 10:02:41.041672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.688 [2024-11-27 10:02:41.041679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.688 [2024-11-27 10:02:41.050282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.688 [2024-11-27 10:02:41.050299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.688 [2024-11-27 10:02:41.050306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.688 [2024-11-27 10:02:41.059362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.688 [2024-11-27 10:02:41.059380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.688 [2024-11-27 10:02:41.059386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.688 [2024-11-27 10:02:41.067589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.688 [2024-11-27 10:02:41.067607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.688 [2024-11-27 10:02:41.067613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.688 [2024-11-27 10:02:41.076258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.688 [2024-11-27 10:02:41.076275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.688 [2024-11-27 10:02:41.076282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.688 [2024-11-27 10:02:41.085684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.688 [2024-11-27 10:02:41.085702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:43 nsid:1 lba:18211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.688 [2024-11-27 10:02:41.085708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.688 [2024-11-27 10:02:41.098410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.688 [2024-11-27 10:02:41.098428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.688 [2024-11-27 10:02:41.098440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.688 [2024-11-27 10:02:41.106790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.688 [2024-11-27 10:02:41.106807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.688 [2024-11-27 10:02:41.106813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.688 [2024-11-27 10:02:41.118753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.688 [2024-11-27 10:02:41.118772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.688 [2024-11-27 10:02:41.118779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.688 [2024-11-27 10:02:41.130863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.688 [2024-11-27 10:02:41.130881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.688 [2024-11-27 10:02:41.130888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.688 [2024-11-27 10:02:41.138511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.688 [2024-11-27 10:02:41.138529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.688 [2024-11-27 10:02:41.138535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.688 [2024-11-27 10:02:41.147857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.688 [2024-11-27 10:02:41.147875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.688 [2024-11-27 10:02:41.147881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.949 [2024-11-27 10:02:41.156574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.949 [2024-11-27 10:02:41.156592] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.949 [2024-11-27 10:02:41.156599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.949 [2024-11-27 10:02:41.165282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.949 [2024-11-27 10:02:41.165299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.949 [2024-11-27 10:02:41.165306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.949 [2024-11-27 10:02:41.175922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.949 [2024-11-27 10:02:41.175940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.949 [2024-11-27 10:02:41.175946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.949 [2024-11-27 10:02:41.184401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.949 [2024-11-27 10:02:41.184422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.949 [2024-11-27 10:02:41.184429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.949 [2024-11-27 10:02:41.193506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.949 [2024-11-27 10:02:41.193524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.949 [2024-11-27 10:02:41.193531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.949 [2024-11-27 10:02:41.201987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.949 [2024-11-27 10:02:41.202004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.949 [2024-11-27 10:02:41.202011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.949 [2024-11-27 10:02:41.211720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:25.949 [2024-11-27 10:02:41.211739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.949 [2024-11-27 10:02:41.211746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.949 [2024-11-27 10:02:41.220469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
00:30:25.949 [2024-11-27 10:02:41.220487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:25.949 [2024-11-27 10:02:41.220494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[...truncated...]
00:30:26.474 26886.00 IOPS, 105.02 MiB/s [2024-11-27T09:02:41.940Z]
[...truncated...]
TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.999 [2024-11-27 10:02:42.435167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:26.999 [2024-11-27 10:02:42.435185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.000 [2024-11-27 10:02:42.435192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.000 [2024-11-27 10:02:42.444050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:27.000 [2024-11-27 10:02:42.444067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.000 [2024-11-27 10:02:42.444074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.000 [2024-11-27 10:02:42.454370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:27.000 [2024-11-27 10:02:42.454388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.000 [2024-11-27 10:02:42.454394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.000 [2024-11-27 10:02:42.463167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:27.000 [2024-11-27 10:02:42.463184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.000 [2024-11-27 10:02:42.463191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.261 [2024-11-27 10:02:42.471635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:27.261 [2024-11-27 10:02:42.471654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.261 [2024-11-27 10:02:42.471662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.261 [2024-11-27 10:02:42.481197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:27.261 [2024-11-27 10:02:42.481215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.261 [2024-11-27 10:02:42.481223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.261 [2024-11-27 10:02:42.490597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:27.261 [2024-11-27 10:02:42.490614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.261 [2024-11-27 10:02:42.490621] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.261 [2024-11-27 10:02:42.499742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:27.261 [2024-11-27 10:02:42.499760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.261 [2024-11-27 10:02:42.499766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.261 [2024-11-27 10:02:42.508482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:27.261 [2024-11-27 10:02:42.508500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.261 [2024-11-27 10:02:42.508506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.261 [2024-11-27 10:02:42.518492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:27.261 [2024-11-27 10:02:42.518509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.261 [2024-11-27 10:02:42.518516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.261 [2024-11-27 10:02:42.526929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:27.261 [2024-11-27 10:02:42.526947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.261 [2024-11-27 10:02:42.526953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.261 [2024-11-27 10:02:42.536198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:27.261 [2024-11-27 10:02:42.536215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.261 [2024-11-27 10:02:42.536225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.261 [2024-11-27 10:02:42.544520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:27.261 [2024-11-27 10:02:42.544537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.261 [2024-11-27 10:02:42.544544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.261 [2024-11-27 10:02:42.553334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:27.261 [2024-11-27 10:02:42.553352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:27.261 [2024-11-27 10:02:42.553359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.261 [2024-11-27 10:02:42.562964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:27.261 [2024-11-27 10:02:42.562982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.261 [2024-11-27 10:02:42.562989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.262 [2024-11-27 10:02:42.570333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:27.262 [2024-11-27 10:02:42.570350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.262 [2024-11-27 10:02:42.570357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.262 [2024-11-27 10:02:42.580257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:27.262 [2024-11-27 10:02:42.580275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.262 [2024-11-27 10:02:42.580282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.262 [2024-11-27 10:02:42.590043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:27.262 [2024-11-27 10:02:42.590061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.262 [2024-11-27 10:02:42.590068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.262 [2024-11-27 10:02:42.599370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:27.262 [2024-11-27 10:02:42.599388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.262 [2024-11-27 10:02:42.599396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.262 [2024-11-27 10:02:42.607555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:27.262 [2024-11-27 10:02:42.607572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.262 [2024-11-27 10:02:42.607579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.262 [2024-11-27 10:02:42.617800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:27.262 [2024-11-27 10:02:42.617818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:19715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.262 [2024-11-27 10:02:42.617827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.262 [2024-11-27 10:02:42.626733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:27.262 [2024-11-27 10:02:42.626751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.262 [2024-11-27 10:02:42.626758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.262 [2024-11-27 10:02:42.634990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:27.262 [2024-11-27 10:02:42.635007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.262 [2024-11-27 10:02:42.635014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.262 [2024-11-27 10:02:42.643703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:27.262 [2024-11-27 10:02:42.643721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.262 [2024-11-27 10:02:42.643728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.262 [2024-11-27 10:02:42.653049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:27.262 [2024-11-27 10:02:42.653067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.262 [2024-11-27 10:02:42.653074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.262 [2024-11-27 10:02:42.662177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:27.262 [2024-11-27 10:02:42.662195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.262 [2024-11-27 10:02:42.662202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.262 [2024-11-27 10:02:42.671497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:27.262 [2024-11-27 10:02:42.671514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.262 [2024-11-27 10:02:42.671521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.262 [2024-11-27 10:02:42.679660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:27.262 [2024-11-27 10:02:42.679678] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.262 [2024-11-27 10:02:42.679684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.262 [2024-11-27 10:02:42.690324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:27.262 [2024-11-27 10:02:42.690341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.262 [2024-11-27 10:02:42.690351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.262 [2024-11-27 10:02:42.700268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:27.262 [2024-11-27 10:02:42.700286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.262 [2024-11-27 10:02:42.700292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.262 [2024-11-27 10:02:42.709978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:27.262 [2024-11-27 10:02:42.709995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.262 [2024-11-27 10:02:42.710001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.262 [2024-11-27 10:02:42.718322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:27.262 [2024-11-27 10:02:42.718339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.262 [2024-11-27 10:02:42.718345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.523 [2024-11-27 10:02:42.728512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:27.523 [2024-11-27 10:02:42.728530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.523 [2024-11-27 10:02:42.728537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.523 [2024-11-27 10:02:42.736589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:27.523 [2024-11-27 10:02:42.736608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.523 [2024-11-27 10:02:42.736614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.523 [2024-11-27 10:02:42.745533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 
00:30:27.523 [2024-11-27 10:02:42.745551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.523 [2024-11-27 10:02:42.745558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.523 [2024-11-27 10:02:42.754759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:27.523 [2024-11-27 10:02:42.754777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.523 [2024-11-27 10:02:42.754783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.523 27386.00 IOPS, 106.98 MiB/s [2024-11-27T09:02:42.989Z] [2024-11-27 10:02:42.764351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x125d700) 00:30:27.523 [2024-11-27 10:02:42.764368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.523 [2024-11-27 10:02:42.764375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.523 00:30:27.523 Latency(us) 00:30:27.523 [2024-11-27T09:02:42.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:27.523 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:27.523 nvme0n1 : 2.00 27407.97 107.06 0.00 0.00 4665.89 2157.23 16602.45 00:30:27.523 [2024-11-27T09:02:42.989Z] =================================================================================================================== 00:30:27.523 [2024-11-27T09:02:42.989Z] Total : 27407.97 107.06 0.00 0.00 4665.89 2157.23 16602.45 00:30:27.523 { 00:30:27.523 "results": [ 00:30:27.523 { 00:30:27.523 "job": "nvme0n1", 00:30:27.523 "core_mask": "0x2", 00:30:27.523 "workload": "randread", 00:30:27.523 "status": "finished", 00:30:27.523 "queue_depth": 128, 00:30:27.523 "io_size": 4096, 00:30:27.523 "runtime": 2.003067, 00:30:27.523 "iops": 27407.969878191794, 00:30:27.523 "mibps": 107.0623823366867, 00:30:27.523 "io_failed": 0, 00:30:27.523 "io_timeout": 0, 00:30:27.523 "avg_latency_us": 4665.893490710382, 00:30:27.523 "min_latency_us": 2157.2266666666665, 00:30:27.523 "max_latency_us": 16602.453333333335 00:30:27.523 } 00:30:27.523 ], 00:30:27.523 "core_count": 1 00:30:27.523 } 00:30:27.523 10:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:27.523 10:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:27.523 10:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:27.523 | .driver_specific 00:30:27.523 | .nvme_error 00:30:27.523 | .status_code 00:30:27.523 | .command_transient_transport_error' 00:30:27.523 10:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:27.523 10:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 215 > 0 )) 00:30:27.523 10:02:42 
10:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4056640
10:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 4056640 ']'
10:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 4056640
10:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
10:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
10:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4056640
10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4056640'
killing process with pid 4056640
10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 4056640
Received shutdown signal, test time was about 2.000000 seconds
00:30:27.783 Latency(us)
00:30:27.783 [2024-11-27T09:02:43.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:27.783 [2024-11-27T09:02:43.249Z] ===================================================================
00:30:27.783 [2024-11-27T09:02:43.249Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 4056640
10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4057415
10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4057415 /var/tmp/bperf.sock
10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
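The launch traced above is the standard bperf pattern in these tests: start bdevperf idle with -z on a private RPC socket, remember its pid, and poll the socket before issuing any RPCs. A rough standalone equivalent under the same paths follows; the polling loop is a hypothetical stand-in for autotest_common.sh's waitforlisten helper:

    # Start bdevperf with no job file (-z): it pins core 1 (-m 2) and sits
    # idle, waiting for configuration over the private RPC socket.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # Hypothetical stand-in for waitforlisten: retry a harmless RPC until the
    # socket answers (up to max_retries=100, matching the traced helper).
    for ((i = 0; i < 100; i++)); do
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done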
00:30:27.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:27.784 [2024-11-27 10:02:43.182816] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization...
00:30:27.784 [2024-11-27 10:02:43.182875] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4057415 ]
00:30:27.784 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:27.784 Zero copy mechanism will not be used.
00:30:28.044 [2024-11-27 10:02:43.267224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:28.044 [2024-11-27 10:02:43.296412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
10:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:28.875 10:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
10:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
10:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:28.875 10:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
10:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
10:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:29.136 nvme0n1
00:30:29.136 10:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
10:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
10:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:29.136 10:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
10:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
10:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
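Taken together, that RPC sequence is the whole error-injection setup: enable per-status-code error accounting with retries off, attach the TCP controller with data digest checking on, and only then arm the crc32c corruption so every subsequent read completes with a digest error. A condensed sketch of the same steps follows; the socket wiring is an assumption (bperf_rpc pins -s /var/tmp/bperf.sock for the bdevperf app, while rpc_cmd goes to the long-running app's default socket):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Keep NVMe error counters per status code and never retry failed I/O,
    # so every digest failure stays visible to bdev_get_iostat.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Start from a clean state: no crc32c corruption armed yet.
    $rpc accel_error_inject_error -o crc32c -t disable
    # Attach the TCP controller with data digest enabled (--ddgst).
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Arm the fault: corrupt crc32c results (-i 32 as in the trace above) so
    # computed data digests stop matching and reads fail with 00/22 status.
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32
    # Kick off the configured randread job in the idle bdevperf app.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests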
00:30:29.136 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:29.136 Zero copy mechanism will not be used.
00:30:29.136 Running I/O for 2 seconds...
00:30:29.136 [2024-11-27 10:02:44.527571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10)
00:30:29.136 [2024-11-27 10:02:44.527604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:29.136 [2024-11-27 10:02:44.527612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... dozens of near-identical records omitted: the same data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplet repeats for the 128 KiB (len:32) reads on tqpair 0xb98a10 between 10:02:44.536 and 10:02:45.050, varying only in timestamp, cid, lba, and sqhd ...]
00:30:29.662 [2024-11-27 10:02:45.059712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10)
00:30:29.662 [2024-11-27 10:02:45.059733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ
sqid:1 cid:9 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.662 [2024-11-27 10:02:45.059739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:29.662 [2024-11-27 10:02:45.064028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.662 [2024-11-27 10:02:45.064046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.662 [2024-11-27 10:02:45.064052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:29.662 [2024-11-27 10:02:45.071413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.662 [2024-11-27 10:02:45.071431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.662 [2024-11-27 10:02:45.071438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:29.662 [2024-11-27 10:02:45.076533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.662 [2024-11-27 10:02:45.076551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.662 [2024-11-27 10:02:45.076557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:29.662 [2024-11-27 10:02:45.086495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.662 [2024-11-27 10:02:45.086513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.662 [2024-11-27 10:02:45.086520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:29.662 [2024-11-27 10:02:45.095577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.662 [2024-11-27 10:02:45.095595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.662 [2024-11-27 10:02:45.095601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:29.662 [2024-11-27 10:02:45.103796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.662 [2024-11-27 10:02:45.103814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.662 [2024-11-27 10:02:45.103820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:29.662 [2024-11-27 10:02:45.114828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.662 [2024-11-27 10:02:45.114846] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.662 [2024-11-27 10:02:45.114852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:29.662 [2024-11-27 10:02:45.119933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.662 [2024-11-27 10:02:45.119950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.662 [2024-11-27 10:02:45.119957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:29.662 [2024-11-27 10:02:45.124353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.662 [2024-11-27 10:02:45.124371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.662 [2024-11-27 10:02:45.124377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:29.923 [2024-11-27 10:02:45.129195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.923 [2024-11-27 10:02:45.129213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.923 [2024-11-27 10:02:45.129219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:29.923 [2024-11-27 10:02:45.136095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.923 [2024-11-27 10:02:45.136113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.923 [2024-11-27 10:02:45.136119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:29.923 [2024-11-27 10:02:45.144466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.923 [2024-11-27 10:02:45.144484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.923 [2024-11-27 10:02:45.144490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:29.923 [2024-11-27 10:02:45.152330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.923 [2024-11-27 10:02:45.152347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.923 [2024-11-27 10:02:45.152353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:29.923 [2024-11-27 10:02:45.155386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.923 
[2024-11-27 10:02:45.155404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.923 [2024-11-27 10:02:45.155410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:29.923 [2024-11-27 10:02:45.159156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.923 [2024-11-27 10:02:45.159177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.923 [2024-11-27 10:02:45.159183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:29.923 [2024-11-27 10:02:45.163736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.923 [2024-11-27 10:02:45.163753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.923 [2024-11-27 10:02:45.163759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:29.923 [2024-11-27 10:02:45.168312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.923 [2024-11-27 10:02:45.168329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.923 [2024-11-27 10:02:45.168339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:29.923 [2024-11-27 10:02:45.174383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.923 [2024-11-27 10:02:45.174400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.923 [2024-11-27 10:02:45.174406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:29.923 [2024-11-27 10:02:45.178818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.923 [2024-11-27 10:02:45.178835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.923 [2024-11-27 10:02:45.178841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:29.923 [2024-11-27 10:02:45.183259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.923 [2024-11-27 10:02:45.183276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.923 [2024-11-27 10:02:45.183282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:29.923 [2024-11-27 10:02:45.188943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xb98a10) 00:30:29.924 [2024-11-27 10:02:45.188960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.924 [2024-11-27 10:02:45.188967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:29.924 [2024-11-27 10:02:45.197986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.924 [2024-11-27 10:02:45.198003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.924 [2024-11-27 10:02:45.198009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:29.924 [2024-11-27 10:02:45.202438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.924 [2024-11-27 10:02:45.202455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.924 [2024-11-27 10:02:45.202462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:29.924 [2024-11-27 10:02:45.206815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.924 [2024-11-27 10:02:45.206833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.924 [2024-11-27 10:02:45.206840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:29.924 [2024-11-27 10:02:45.211218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.924 [2024-11-27 10:02:45.211235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.924 [2024-11-27 10:02:45.211242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:29.924 [2024-11-27 10:02:45.215905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.924 [2024-11-27 10:02:45.215922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.924 [2024-11-27 10:02:45.215928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:29.924 [2024-11-27 10:02:45.224626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.924 [2024-11-27 10:02:45.224643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.924 [2024-11-27 10:02:45.224650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:29.924 [2024-11-27 10:02:45.230418] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.924 [2024-11-27 10:02:45.230435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.924 [2024-11-27 10:02:45.230441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:29.924 [2024-11-27 10:02:45.240185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.924 [2024-11-27 10:02:45.240202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.924 [2024-11-27 10:02:45.240208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:29.924 [2024-11-27 10:02:45.250764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.924 [2024-11-27 10:02:45.250781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.924 [2024-11-27 10:02:45.250787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:29.924 [2024-11-27 10:02:45.263119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.924 [2024-11-27 10:02:45.263136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.924 [2024-11-27 10:02:45.263142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:29.924 [2024-11-27 10:02:45.275610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.924 [2024-11-27 10:02:45.275626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.924 [2024-11-27 10:02:45.275632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:29.924 [2024-11-27 10:02:45.289135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.924 [2024-11-27 10:02:45.289152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.924 [2024-11-27 10:02:45.289164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:29.924 [2024-11-27 10:02:45.300255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.924 [2024-11-27 10:02:45.300273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.924 [2024-11-27 10:02:45.300282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:30:29.924 [2024-11-27 10:02:45.312074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.924 [2024-11-27 10:02:45.312091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.924 [2024-11-27 10:02:45.312097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:29.924 [2024-11-27 10:02:45.323287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.924 [2024-11-27 10:02:45.323304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.924 [2024-11-27 10:02:45.323310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:29.924 [2024-11-27 10:02:45.334127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.924 [2024-11-27 10:02:45.334145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.924 [2024-11-27 10:02:45.334151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:29.924 [2024-11-27 10:02:45.338519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.924 [2024-11-27 10:02:45.338538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.924 [2024-11-27 10:02:45.338544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:29.924 [2024-11-27 10:02:45.345589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.924 [2024-11-27 10:02:45.345606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.924 [2024-11-27 10:02:45.345612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:29.924 [2024-11-27 10:02:45.354214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.924 [2024-11-27 10:02:45.354232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.924 [2024-11-27 10:02:45.354238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:29.924 [2024-11-27 10:02:45.360163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.924 [2024-11-27 10:02:45.360181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.924 [2024-11-27 10:02:45.360187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:29.924 [2024-11-27 10:02:45.365271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.924 [2024-11-27 10:02:45.365289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.924 [2024-11-27 10:02:45.365295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:29.924 [2024-11-27 10:02:45.373924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.924 [2024-11-27 10:02:45.373945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.924 [2024-11-27 10:02:45.373952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:29.924 [2024-11-27 10:02:45.382412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:29.924 [2024-11-27 10:02:45.382430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.924 [2024-11-27 10:02:45.382436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:30.185 [2024-11-27 10:02:45.394300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.185 [2024-11-27 10:02:45.394318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.185 [2024-11-27 10:02:45.394325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:30.185 [2024-11-27 10:02:45.407125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.185 [2024-11-27 10:02:45.407143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.185 [2024-11-27 10:02:45.407149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:30.185 [2024-11-27 10:02:45.418694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.185 [2024-11-27 10:02:45.418713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.185 [2024-11-27 10:02:45.418721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:30.185 [2024-11-27 10:02:45.430618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.185 [2024-11-27 10:02:45.430637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.185 [2024-11-27 10:02:45.430643] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:30.185 [2024-11-27 10:02:45.442408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.185 [2024-11-27 10:02:45.442426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.185 [2024-11-27 10:02:45.442433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:30.185 [2024-11-27 10:02:45.453480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.185 [2024-11-27 10:02:45.453498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.185 [2024-11-27 10:02:45.453504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:30.185 [2024-11-27 10:02:45.465776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.185 [2024-11-27 10:02:45.465795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.185 [2024-11-27 10:02:45.465801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:30.185 [2024-11-27 10:02:45.478728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.185 [2024-11-27 10:02:45.478746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.186 [2024-11-27 10:02:45.478753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:30.186 [2024-11-27 10:02:45.486691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.186 [2024-11-27 10:02:45.486709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.186 [2024-11-27 10:02:45.486715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:30.186 [2024-11-27 10:02:45.492761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.186 [2024-11-27 10:02:45.492779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.186 [2024-11-27 10:02:45.492785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:30.186 [2024-11-27 10:02:45.503643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.186 [2024-11-27 10:02:45.503662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.186 
[2024-11-27 10:02:45.503668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:30.186 [2024-11-27 10:02:45.514091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.186 [2024-11-27 10:02:45.514110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.186 [2024-11-27 10:02:45.514116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:30.186 [2024-11-27 10:02:45.521806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.186 [2024-11-27 10:02:45.521824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.186 [2024-11-27 10:02:45.521831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:30.186 3879.00 IOPS, 484.88 MiB/s [2024-11-27T09:02:45.652Z] [2024-11-27 10:02:45.534637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.186 [2024-11-27 10:02:45.534659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.186 [2024-11-27 10:02:45.534666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:30.186 [2024-11-27 10:02:45.546027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.186 [2024-11-27 10:02:45.546045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.186 [2024-11-27 10:02:45.546051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:30.186 [2024-11-27 10:02:45.557924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.186 [2024-11-27 10:02:45.557942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.186 [2024-11-27 10:02:45.557953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:30.186 [2024-11-27 10:02:45.570574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.186 [2024-11-27 10:02:45.570593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.186 [2024-11-27 10:02:45.570599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:30.186 [2024-11-27 10:02:45.582606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.186 [2024-11-27 10:02:45.582624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:14 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.186 [2024-11-27 10:02:45.582630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:30.186 [2024-11-27 10:02:45.594881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.186 [2024-11-27 10:02:45.594900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.186 [2024-11-27 10:02:45.594906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:30.186 [2024-11-27 10:02:45.607415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.186 [2024-11-27 10:02:45.607433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.186 [2024-11-27 10:02:45.607439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:30.186 [2024-11-27 10:02:45.619965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.186 [2024-11-27 10:02:45.619983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.186 [2024-11-27 10:02:45.619989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:30.186 [2024-11-27 10:02:45.632483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.186 [2024-11-27 10:02:45.632501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.186 [2024-11-27 10:02:45.632507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:30.186 [2024-11-27 10:02:45.645104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.186 [2024-11-27 10:02:45.645122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.186 [2024-11-27 10:02:45.645128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:30.447 [2024-11-27 10:02:45.656695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.447 [2024-11-27 10:02:45.656713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.447 [2024-11-27 10:02:45.656719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:30.447 [2024-11-27 10:02:45.667696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.447 [2024-11-27 10:02:45.667714] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.447 [2024-11-27 10:02:45.667720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:30.447 [2024-11-27 10:02:45.679631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.447 [2024-11-27 10:02:45.679649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.447 [2024-11-27 10:02:45.679655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:30.447 [2024-11-27 10:02:45.692638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.447 [2024-11-27 10:02:45.692657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.447 [2024-11-27 10:02:45.692663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:30.447 [2024-11-27 10:02:45.705165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.447 [2024-11-27 10:02:45.705184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.447 [2024-11-27 10:02:45.705190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:30.447 [2024-11-27 10:02:45.716661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.447 [2024-11-27 10:02:45.716680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.447 [2024-11-27 10:02:45.716686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:30.447 [2024-11-27 10:02:45.727821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.447 [2024-11-27 10:02:45.727839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.447 [2024-11-27 10:02:45.727845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:30.447 [2024-11-27 10:02:45.740057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.447 [2024-11-27 10:02:45.740076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.447 [2024-11-27 10:02:45.740082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:30.447 [2024-11-27 10:02:45.751632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 
00:30:30.447 [2024-11-27 10:02:45.751650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.447 [2024-11-27 10:02:45.751656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:30.447 [2024-11-27 10:02:45.763200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.447 [2024-11-27 10:02:45.763218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.447 [2024-11-27 10:02:45.763228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:30.447 [2024-11-27 10:02:45.774396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.447 [2024-11-27 10:02:45.774414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.447 [2024-11-27 10:02:45.774420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:30.447 [2024-11-27 10:02:45.785975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.447 [2024-11-27 10:02:45.785993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.447 [2024-11-27 10:02:45.785999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:30.447 [2024-11-27 10:02:45.791505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.447 [2024-11-27 10:02:45.791523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.447 [2024-11-27 10:02:45.791530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:30.447 [2024-11-27 10:02:45.796193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.447 [2024-11-27 10:02:45.796211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.447 [2024-11-27 10:02:45.796217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:30.447 [2024-11-27 10:02:45.805262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.447 [2024-11-27 10:02:45.805281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.448 [2024-11-27 10:02:45.805287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:30.448 [2024-11-27 10:02:45.816617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xb98a10) 00:30:30.448 [2024-11-27 10:02:45.816635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.448 [2024-11-27 10:02:45.816641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:30.448 [2024-11-27 10:02:45.821471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.448 [2024-11-27 10:02:45.821490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.448 [2024-11-27 10:02:45.821496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:30.448 [2024-11-27 10:02:45.829168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.448 [2024-11-27 10:02:45.829186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.448 [2024-11-27 10:02:45.829192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:30.448 [2024-11-27 10:02:45.834192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.448 [2024-11-27 10:02:45.834213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.448 [2024-11-27 10:02:45.834219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:30.448 [2024-11-27 10:02:45.842640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.448 [2024-11-27 10:02:45.842658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.448 [2024-11-27 10:02:45.842664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:30.448 [2024-11-27 10:02:45.852557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.448 [2024-11-27 10:02:45.852576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.448 [2024-11-27 10:02:45.852582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:30.448 [2024-11-27 10:02:45.860240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:30.448 [2024-11-27 10:02:45.860259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.448 [2024-11-27 10:02:45.860265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:30.448 [2024-11-27 10:02:45.869142] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10)
00:30:30.448 [2024-11-27 10:02:45.869165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:30.448 [2024-11-27 10:02:45.869171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:30.448 [... this three-line pattern (data digest error at nvme_tcp.c:1365, the failing READ, a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for every in-flight READ on tqpair=(0xb98a10) from 10:02:45.875 through 10:02:46.507; only the timestamp, cid, and lba differ, so the intervening repetitions are omitted ...]
00:30:31.235 [2024-11-27 10:02:46.516829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:31.235 [2024-11-27 10:02:46.516848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.235 [2024-11-27 10:02:46.516854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:31.235 3750.00 IOPS, 468.75 MiB/s [2024-11-27T09:02:46.701Z] [2024-11-27 10:02:46.527063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb98a10) 00:30:31.235 [2024-11-27 10:02:46.527081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.235 [2024-11-27 10:02:46.527088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:31.235 00:30:31.235 Latency(us) 00:30:31.235 [2024-11-27T09:02:46.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:31.235 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:31.235 nvme0n1 : 2.00 3752.29 469.04 0.00 0.00 4260.30 505.17 13489.49 00:30:31.235 [2024-11-27T09:02:46.701Z] =================================================================================================================== 00:30:31.235 [2024-11-27T09:02:46.701Z] Total : 3752.29 469.04 0.00 0.00 4260.30 505.17 13489.49 00:30:31.235 { 00:30:31.235 "results": [ 00:30:31.235 { 00:30:31.235 "job": "nvme0n1", 00:30:31.235 "core_mask": "0x2", 00:30:31.235 "workload": "randread", 00:30:31.235 "status": "finished", 00:30:31.235 "queue_depth": 16, 00:30:31.235 "io_size": 131072, 00:30:31.235 "runtime": 2.003044, 00:30:31.235 "iops": 3752.289016117469, 00:30:31.235 "mibps": 469.03612701468364, 00:30:31.235 "io_failed": 0, 00:30:31.235 "io_timeout": 0, 00:30:31.235 "avg_latency_us": 4260.300046123825, 00:30:31.235 "min_latency_us": 505.17333333333335, 00:30:31.235 "max_latency_us": 13489.493333333334 00:30:31.235 } 00:30:31.235 ], 00:30:31.235 "core_count": 1 00:30:31.235 } 00:30:31.235 10:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:31.235 10:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:31.235 10:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:31.235 | .driver_specific 00:30:31.235 | .nvme_error 00:30:31.235 | .status_code 00:30:31.235 | .command_transient_transport_error' 00:30:31.235 10:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:31.496 10:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 243 > 0 )) 00:30:31.496 10:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4057415 00:30:31.497 10:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 4057415 ']' 00:30:31.497 10:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 4057415 00:30:31.497 10:02:46 
00:30:31.497 10:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:30:31.497 10:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:31.497 10:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4057415
00:30:31.497 10:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:30:31.497 10:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:30:31.497 10:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4057415'
00:30:31.497 killing process with pid 4057415
00:30:31.497 10:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 4057415
00:30:31.497 Received shutdown signal, test time was about 2.000000 seconds
00:30:31.497
00:30:31.497 Latency(us)
00:30:31.497 [2024-11-27T09:02:46.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:31.497 [2024-11-27T09:02:46.963Z] ===================================================================================================================
00:30:31.497 [2024-11-27T09:02:46.963Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:31.497 10:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 4057415
00:30:31.497 10:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:30:31.497 10:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:30:31.497 10:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:30:31.497 10:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:30:31.497 10:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:30:31.497 10:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4058133
00:30:31.497 10:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4058133 /var/tmp/bperf.sock
00:30:31.497 10:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 4058133 ']'
00:30:31.497 10:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:30:31.497 10:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:31.497 10:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:31.497 10:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:30:31.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
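The run_bperf_err setup traced above boils down to launching a dedicated bdevperf instance with its own RPC socket and waiting for that socket before configuring anything. A condensed sketch of the equivalent shell, using the binary path and arguments from this run; the socket poll is a simplification of what the harness's waitforlisten actually does:

    # Start bdevperf idle (-z: wait for the perform_tests RPC) on core 1 (-m 2)
    # with a private RPC socket; workload: 4096-byte random writes, qd 128, 2 s.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    # Poll until the UNIX-domain RPC socket exists (waitforlisten retries this).
    while [ ! -S /var/tmp/bperf.sock ]; do sleep 0.1; done

Because of -z, no I/O starts until a controller is attached and perform_tests is issued over /var/tmp/bperf.sock, which is exactly what the next steps in the trace do.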
00:30:31.497 10:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:31.497 10:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:31.497 [2024-11-27 10:02:46.957425] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization...
00:30:31.497 [2024-11-27 10:02:46.957479] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4058133 ]
00:30:31.757 [2024-11-27 10:02:47.040573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:31.757 [2024-11-27 10:02:47.068020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:32.327 10:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:32.327 10:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:30:32.327 10:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:32.327 10:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:32.587 10:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:30:32.587 10:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:32.587 10:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:32.587 10:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:32.587 10:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:32.587 10:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:33.159 nvme0n1
00:30:33.159 10:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
10:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
10:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
10:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
10:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
10:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:33.159 Running I/O for 2 seconds...
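The error-injection setup traced above has three moving parts: NVMe error statistics are enabled on the bdevperf side, the controller is attached with data digest (--ddgst) negotiated, and the accel framework is told to corrupt its crc32c results so computed digests stop matching. A minimal sketch of that RPC sequence with the values from this run; note that the last call assumes rpc_cmd in the harness addresses the target application's default RPC socket, which this trace does not show:

    # bdevperf side: count NVMe errors per status code and set the bdev retry count
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # attach the TCP subsystem with data digest enabled; this creates bdev nvme0n1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # inject corruption into accel crc32c operations (-i 256 as in this run);
    # no -s here, so this goes to the default RPC socket (assumed: the target app)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        accel_error_inject_error -o crc32c -t corrupt -i 256

With crc32c results corrupted, each 4 KiB write PDU fails its data digest check (the tcp.c:2233 "Data digest error" lines that follow appear to come from the target-side TCP transport, as opposed to the host's nvme_tcp.c in the earlier randread phase) and is completed back as COMMAND TRANSIENT TRANSPORT ERROR (00/22), which the second get_transient_errcount check will count.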
00:30:33.159 [2024-11-27 10:02:48.444515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8
00:30:33.159 [2024-11-27 10:02:48.444752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:33.159 [2024-11-27 10:02:48.444779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:30:33.159 [... the same three-line pattern (Data digest error at tcp.c:2233, the failing WRITE, a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for every queued WRITE on tqpair=(0x21f9520) from 10:02:48.453 through 10:02:48.800; only the timestamp, cid, and lba differ, so the intervening repetitions are omitted ...]
00:30:33.422 [2024-11-27 10:02:48.809292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8
[2024-11-27 10:02:48.809534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3512 len:1 SGL DATA BLOCK OFFSET 0x0
len:0x1000 00:30:33.422 [2024-11-27 10:02:48.809549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.422 [2024-11-27 10:02:48.818189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.422 [2024-11-27 10:02:48.818290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.422 [2024-11-27 10:02:48.818305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.422 [2024-11-27 10:02:48.826985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.422 [2024-11-27 10:02:48.827227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.422 [2024-11-27 10:02:48.827242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.422 [2024-11-27 10:02:48.835829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.422 [2024-11-27 10:02:48.836070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.422 [2024-11-27 10:02:48.836085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.422 [2024-11-27 10:02:48.844756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.422 [2024-11-27 10:02:48.844970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.422 [2024-11-27 10:02:48.844985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.423 [2024-11-27 10:02:48.853673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.423 [2024-11-27 10:02:48.853884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.423 [2024-11-27 10:02:48.853899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.423 [2024-11-27 10:02:48.862513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.423 [2024-11-27 10:02:48.862718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:25202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.423 [2024-11-27 10:02:48.862734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.423 [2024-11-27 10:02:48.871507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.423 [2024-11-27 10:02:48.871707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20318 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.423 [2024-11-27 10:02:48.871722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.423 [2024-11-27 10:02:48.880372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.423 [2024-11-27 10:02:48.880601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.423 [2024-11-27 10:02:48.880616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.684 [2024-11-27 10:02:48.889257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.684 [2024-11-27 10:02:48.889495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.684 [2024-11-27 10:02:48.889510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.684 [2024-11-27 10:02:48.898240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.684 [2024-11-27 10:02:48.898464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.684 [2024-11-27 10:02:48.898480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.684 [2024-11-27 10:02:48.907117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.684 [2024-11-27 10:02:48.907374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.684 [2024-11-27 10:02:48.907390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.684 [2024-11-27 10:02:48.915995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.684 [2024-11-27 10:02:48.916227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.684 [2024-11-27 10:02:48.916242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.684 [2024-11-27 10:02:48.924844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.685 [2024-11-27 10:02:48.925044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.685 [2024-11-27 10:02:48.925059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.685 [2024-11-27 10:02:48.933729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.685 [2024-11-27 10:02:48.933940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 
nsid:1 lba:5988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.685 [2024-11-27 10:02:48.933955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.685 [2024-11-27 10:02:48.942593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.685 [2024-11-27 10:02:48.942817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.685 [2024-11-27 10:02:48.942832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.685 [2024-11-27 10:02:48.951506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.685 [2024-11-27 10:02:48.951763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.685 [2024-11-27 10:02:48.951787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.685 [2024-11-27 10:02:48.960402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.685 [2024-11-27 10:02:48.960619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.685 [2024-11-27 10:02:48.960634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.685 [2024-11-27 10:02:48.969273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.685 [2024-11-27 10:02:48.969489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.685 [2024-11-27 10:02:48.969505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.685 [2024-11-27 10:02:48.978122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.685 [2024-11-27 10:02:48.978363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.685 [2024-11-27 10:02:48.978378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.685 [2024-11-27 10:02:48.987052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.685 [2024-11-27 10:02:48.987298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.685 [2024-11-27 10:02:48.987314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.685 [2024-11-27 10:02:48.995969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.685 [2024-11-27 10:02:48.996195] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.685 [2024-11-27 10:02:48.996210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.685 [2024-11-27 10:02:49.004811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.685 [2024-11-27 10:02:49.005016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.685 [2024-11-27 10:02:49.005031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.685 [2024-11-27 10:02:49.013687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.685 [2024-11-27 10:02:49.013910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.685 [2024-11-27 10:02:49.013926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.685 [2024-11-27 10:02:49.022535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.685 [2024-11-27 10:02:49.022747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.685 [2024-11-27 10:02:49.022762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.685 [2024-11-27 10:02:49.031418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.685 [2024-11-27 10:02:49.031644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.685 [2024-11-27 10:02:49.031659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.685 [2024-11-27 10:02:49.040372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.685 [2024-11-27 10:02:49.040604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.685 [2024-11-27 10:02:49.040619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.685 [2024-11-27 10:02:49.049285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.685 [2024-11-27 10:02:49.049500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.685 [2024-11-27 10:02:49.049514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.685 [2024-11-27 10:02:49.058178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.685 [2024-11-27 10:02:49.058437] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.685 [2024-11-27 10:02:49.058454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.685 [2024-11-27 10:02:49.067095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.685 [2024-11-27 10:02:49.067335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.685 [2024-11-27 10:02:49.067350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.685 [2024-11-27 10:02:49.075908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.685 [2024-11-27 10:02:49.076132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.685 [2024-11-27 10:02:49.076147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.685 [2024-11-27 10:02:49.084731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.685 [2024-11-27 10:02:49.084952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.685 [2024-11-27 10:02:49.084967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.685 [2024-11-27 10:02:49.093609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.685 [2024-11-27 10:02:49.093865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.685 [2024-11-27 10:02:49.093881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.685 [2024-11-27 10:02:49.102482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.685 [2024-11-27 10:02:49.102749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.685 [2024-11-27 10:02:49.102764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.685 [2024-11-27 10:02:49.111404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.685 [2024-11-27 10:02:49.111507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.685 [2024-11-27 10:02:49.111522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.686 [2024-11-27 10:02:49.120261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.686 [2024-11-27 
10:02:49.120534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.686 [2024-11-27 10:02:49.120549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.686 [2024-11-27 10:02:49.129149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.686 [2024-11-27 10:02:49.129371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.686 [2024-11-27 10:02:49.129386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.686 [2024-11-27 10:02:49.138000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.686 [2024-11-27 10:02:49.138260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.686 [2024-11-27 10:02:49.138283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.686 [2024-11-27 10:02:49.146831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.686 [2024-11-27 10:02:49.147093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:8314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.686 [2024-11-27 10:02:49.147108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.946 [2024-11-27 10:02:49.155716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.946 [2024-11-27 10:02:49.155943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.946 [2024-11-27 10:02:49.155958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.946 [2024-11-27 10:02:49.164535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.946 [2024-11-27 10:02:49.164773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.946 [2024-11-27 10:02:49.164788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.946 [2024-11-27 10:02:49.173384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.946 [2024-11-27 10:02:49.173598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.946 [2024-11-27 10:02:49.173613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.946 [2024-11-27 10:02:49.182235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with 
pdu=0x2000166f7da8 00:30:33.946 [2024-11-27 10:02:49.182464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.946 [2024-11-27 10:02:49.182482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.946 [2024-11-27 10:02:49.191114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.946 [2024-11-27 10:02:49.191353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.946 [2024-11-27 10:02:49.191368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.946 [2024-11-27 10:02:49.199917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.946 [2024-11-27 10:02:49.200133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.946 [2024-11-27 10:02:49.200148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.946 [2024-11-27 10:02:49.208803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.946 [2024-11-27 10:02:49.209027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.946 [2024-11-27 10:02:49.209042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.946 [2024-11-27 10:02:49.217650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.946 [2024-11-27 10:02:49.217869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.946 [2024-11-27 10:02:49.217884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.946 [2024-11-27 10:02:49.226546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.946 [2024-11-27 10:02:49.226837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.946 [2024-11-27 10:02:49.226853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.946 [2024-11-27 10:02:49.235378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.946 [2024-11-27 10:02:49.235597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.946 [2024-11-27 10:02:49.235612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.946 [2024-11-27 10:02:49.244275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.946 [2024-11-27 10:02:49.244516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.946 [2024-11-27 10:02:49.244531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.946 [2024-11-27 10:02:49.253098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.946 [2024-11-27 10:02:49.253352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.946 [2024-11-27 10:02:49.253368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.946 [2024-11-27 10:02:49.262003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.946 [2024-11-27 10:02:49.262222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:24195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.946 [2024-11-27 10:02:49.262237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.946 [2024-11-27 10:02:49.270866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.947 [2024-11-27 10:02:49.270966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.947 [2024-11-27 10:02:49.270981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.947 [2024-11-27 10:02:49.279722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.947 [2024-11-27 10:02:49.279994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.947 [2024-11-27 10:02:49.280010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.947 [2024-11-27 10:02:49.288616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.947 [2024-11-27 10:02:49.288830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.947 [2024-11-27 10:02:49.288845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.947 [2024-11-27 10:02:49.297512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.947 [2024-11-27 10:02:49.297749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.947 [2024-11-27 10:02:49.297764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.947 [2024-11-27 10:02:49.306422] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.947 [2024-11-27 10:02:49.306668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.947 [2024-11-27 10:02:49.306683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.947 [2024-11-27 10:02:49.315345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.947 [2024-11-27 10:02:49.315562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.947 [2024-11-27 10:02:49.315577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.947 [2024-11-27 10:02:49.324272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.947 [2024-11-27 10:02:49.324503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.947 [2024-11-27 10:02:49.324519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.947 [2024-11-27 10:02:49.333212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.947 [2024-11-27 10:02:49.333441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.947 [2024-11-27 10:02:49.333456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.947 [2024-11-27 10:02:49.342259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.947 [2024-11-27 10:02:49.342539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.947 [2024-11-27 10:02:49.342555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.947 [2024-11-27 10:02:49.351150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.947 [2024-11-27 10:02:49.351433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.947 [2024-11-27 10:02:49.351449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.947 [2024-11-27 10:02:49.360044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.947 [2024-11-27 10:02:49.360260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.947 [2024-11-27 10:02:49.360275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.947 [2024-11-27 10:02:49.368943] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.947 [2024-11-27 10:02:49.369203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.947 [2024-11-27 10:02:49.369218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.947 [2024-11-27 10:02:49.377768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.947 [2024-11-27 10:02:49.377968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.947 [2024-11-27 10:02:49.377983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.947 [2024-11-27 10:02:49.386641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.947 [2024-11-27 10:02:49.386863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.947 [2024-11-27 10:02:49.386878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.947 [2024-11-27 10:02:49.395526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.947 [2024-11-27 10:02:49.395779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.947 [2024-11-27 10:02:49.395794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:33.947 [2024-11-27 10:02:49.404360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:33.947 [2024-11-27 10:02:49.404623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.947 [2024-11-27 10:02:49.404638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.208 [2024-11-27 10:02:49.413223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.208 [2024-11-27 10:02:49.413467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.208 [2024-11-27 10:02:49.413486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.208 [2024-11-27 10:02:49.422079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.208 [2024-11-27 10:02:49.422347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.208 [2024-11-27 10:02:49.422362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006c p:0 m:0 dnr:0 
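[editor's note] Every record in the run above follows one pattern: tcp.c:2233 data_crc32_calc_done recomputes the data digest over a received data PDU, finds a mismatch, and the in-flight WRITE is completed with TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. the host is allowed to retry. The NVMe/TCP data digest (DDGST) is CRC32C (Castagnoli), so as a minimal sketch of the check this test is exercising, and making no claim about SPDK's internal implementation, the names below (payload, expected_ddgst) are purely illustrative:

# Minimal CRC32C sketch of the NVMe/TCP data digest (DDGST) comparison
# that data_crc32_calc_done reports failing above. Illustrative only;
# this is not SPDK code, and payload/expected_ddgst are hypothetical names.

def _crc32c_table():
    poly = 0x82F63B78  # reflected Castagnoli polynomial
    table = []
    for byte in range(256):
        crc = byte
        for _ in range(8):
            crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
        table.append(crc)
    return table

_TABLE = _crc32c_table()

def crc32c(data: bytes) -> int:
    # Standard reflected CRC32C: init 0xFFFFFFFF, final XOR 0xFFFFFFFF.
    crc = 0xFFFFFFFF
    for b in data:
        crc = _TABLE[(crc ^ b) & 0xFF] ^ (crc >> 8)
    return crc ^ 0xFFFFFFFF

def ddgst_ok(payload: bytes, expected_ddgst: int) -> bool:
    # A False here is what surfaces in the log as
    # "Data digest error on tqpair=(...)" followed by a
    # TRANSIENT TRANSPORT ERROR (00/22) completion with dnr:0.
    return crc32c(payload) == expected_ddgst

Because dnr:0 leaves the retry bit clear, each failed 0x1000-byte WRITE is retried rather than failed permanently, which is why the same cids (2, 8, 16, 29, 37, 49, 57, 65, 73, 81, 89, 97, 105-113, 121) keep reappearing throughout the run. [end note]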
00:30:34.208 [2024-11-27 10:02:49.430977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.208 [2024-11-27 10:02:49.431222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.208 [2024-11-27 10:02:49.431238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.208 28690.00 IOPS, 112.07 MiB/s [2024-11-27T09:02:49.674Z] [2024-11-27 10:02:49.439860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.208 [2024-11-27 10:02:49.440112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:11102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.208 [2024-11-27 10:02:49.440127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.208 [2024-11-27 10:02:49.448760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.208 [2024-11-27 10:02:49.448986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.208 [2024-11-27 10:02:49.449000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.208 [2024-11-27 10:02:49.457617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.208 [2024-11-27 10:02:49.457844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.208 [2024-11-27 10:02:49.457859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.208 [2024-11-27 10:02:49.466569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.208 [2024-11-27 10:02:49.466787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.208 [2024-11-27 10:02:49.466802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.208 [2024-11-27 10:02:49.475453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.208 [2024-11-27 10:02:49.475667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.208 [2024-11-27 10:02:49.475681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.208 [2024-11-27 10:02:49.484328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.208 [2024-11-27 10:02:49.484566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.208 [2024-11-27 10:02:49.484582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.208 [2024-11-27 10:02:49.493259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.208 [2024-11-27 10:02:49.493515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.208 [2024-11-27 10:02:49.493530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.208 [2024-11-27 10:02:49.502174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.208 [2024-11-27 10:02:49.502377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.208 [2024-11-27 10:02:49.502391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.208 [2024-11-27 10:02:49.511018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.208 [2024-11-27 10:02:49.511274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.208 [2024-11-27 10:02:49.511289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.208 [2024-11-27 10:02:49.519893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.208 [2024-11-27 10:02:49.520104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.208 [2024-11-27 10:02:49.520119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.208 [2024-11-27 10:02:49.528781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.208 [2024-11-27 10:02:49.528886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.208 [2024-11-27 10:02:49.528901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.208 [2024-11-27 10:02:49.537704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.208 [2024-11-27 10:02:49.537925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.208 [2024-11-27 10:02:49.537940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.208 [2024-11-27 10:02:49.546529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.208 [2024-11-27 10:02:49.546747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.208 [2024-11-27 10:02:49.546762] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.208 [2024-11-27 10:02:49.555380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.208 [2024-11-27 10:02:49.555589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.208 [2024-11-27 10:02:49.555604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.208 [2024-11-27 10:02:49.564293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.208 [2024-11-27 10:02:49.564557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.208 [2024-11-27 10:02:49.564572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.208 [2024-11-27 10:02:49.573166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.208 [2024-11-27 10:02:49.573381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.208 [2024-11-27 10:02:49.573396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.208 [2024-11-27 10:02:49.582090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.208 [2024-11-27 10:02:49.582336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.208 [2024-11-27 10:02:49.582351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.208 [2024-11-27 10:02:49.591050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.208 [2024-11-27 10:02:49.591300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.208 [2024-11-27 10:02:49.591316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.208 [2024-11-27 10:02:49.599883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.208 [2024-11-27 10:02:49.600126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.208 [2024-11-27 10:02:49.600141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.208 [2024-11-27 10:02:49.608757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.208 [2024-11-27 10:02:49.608997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.208 [2024-11-27 10:02:49.609012] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.208 [2024-11-27 10:02:49.617692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.208 [2024-11-27 10:02:49.617909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.208 [2024-11-27 10:02:49.617925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.208 [2024-11-27 10:02:49.626592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.208 [2024-11-27 10:02:49.626701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.208 [2024-11-27 10:02:49.626716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.208 [2024-11-27 10:02:49.635494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.208 [2024-11-27 10:02:49.635720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.209 [2024-11-27 10:02:49.635735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.209 [2024-11-27 10:02:49.644430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.209 [2024-11-27 10:02:49.644643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.209 [2024-11-27 10:02:49.644661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.209 [2024-11-27 10:02:49.653264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.209 [2024-11-27 10:02:49.653489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:25566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.209 [2024-11-27 10:02:49.653505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.209 [2024-11-27 10:02:49.662156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.209 [2024-11-27 10:02:49.662413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.209 [2024-11-27 10:02:49.662429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.209 [2024-11-27 10:02:49.671052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.209 [2024-11-27 10:02:49.671269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.209 [2024-11-27 
10:02:49.671284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.470 [2024-11-27 10:02:49.679939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.470 [2024-11-27 10:02:49.680169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.470 [2024-11-27 10:02:49.680185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.470 [2024-11-27 10:02:49.688847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.470 [2024-11-27 10:02:49.689055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.470 [2024-11-27 10:02:49.689071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.470 [2024-11-27 10:02:49.697799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.470 [2024-11-27 10:02:49.698007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.470 [2024-11-27 10:02:49.698021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.470 [2024-11-27 10:02:49.706716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.470 [2024-11-27 10:02:49.706912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.470 [2024-11-27 10:02:49.706927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.470 [2024-11-27 10:02:49.715615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.470 [2024-11-27 10:02:49.715873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.470 [2024-11-27 10:02:49.715888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.470 [2024-11-27 10:02:49.724502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.470 [2024-11-27 10:02:49.724772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.470 [2024-11-27 10:02:49.724787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.470 [2024-11-27 10:02:49.733438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.470 [2024-11-27 10:02:49.733685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
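[editor's note] A console dump like this contains hundreds of identical triplets that differ only in timestamp, cid, and lba. A short helper can condense it into per-queue-entry counts; this is hedged tooling for reading the log, not part of the SPDK test suite, and it assumes only that the records keep the exact field layout printed above and that the log is piped on stdin:

# Hedged helper (hypothetical, not from the SPDK tree): summarize the
# repeated digest-error WRITE records above into per-(sqid, cid) counts.
# Usage sketch: python3 summarize.py < console.log
import re
import sys
from collections import Counter

# Matches e.g. "WRITE sqid:1 cid:105 nsid:1 lba:22779 len:1"
WRITE_RE = re.compile(r"WRITE sqid:(\d+) cid:(\d+) nsid:\d+ lba:(\d+) len:\d+")

def summarize(text: str):
    per_cid = Counter()
    lbas = []
    # findall over the whole text so records that wrap across
    # display lines are still matched.
    for sqid, cid, lba in WRITE_RE.findall(text):
        per_cid[(int(sqid), int(cid))] += 1
        lbas.append(int(lba))
    return per_cid, lbas

if __name__ == "__main__":
    per_cid, lbas = summarize(sys.stdin.read())
    print(f"{len(lbas)} digest-error WRITEs across {len(per_cid)} (sqid, cid) pairs")
    for (sqid, cid), n in per_cid.most_common():
        print(f"  sqid {sqid} cid {cid:3d}: {n}")

In this section every WRITE notice is paired with a preceding data_crc32_calc_done error, so counting the WRITE records is equivalent to counting injected digest failures; the interleaved throughput line (28690.00 IOPS, 112.07 MiB/s) shows the workload keeps making progress while the retries happen. [end note]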
00:30:34.470 [2024-11-27 10:02:49.733699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.470 [2024-11-27 10:02:49.742286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.470 [2024-11-27 10:02:49.742510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.470 [2024-11-27 10:02:49.742525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.470 [2024-11-27 10:02:49.751226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.470 [2024-11-27 10:02:49.751452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.470 [2024-11-27 10:02:49.751468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.470 [2024-11-27 10:02:49.760130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.470 [2024-11-27 10:02:49.760380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.470 [2024-11-27 10:02:49.760395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.470 [2024-11-27 10:02:49.768972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.470 [2024-11-27 10:02:49.769192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.470 [2024-11-27 10:02:49.769207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.470 [2024-11-27 10:02:49.777843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.470 [2024-11-27 10:02:49.778063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.470 [2024-11-27 10:02:49.778079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.470 [2024-11-27 10:02:49.786716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.470 [2024-11-27 10:02:49.786914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.470 [2024-11-27 10:02:49.786929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.470 [2024-11-27 10:02:49.795557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.470 [2024-11-27 10:02:49.795786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20572 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.470 [2024-11-27 10:02:49.795801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.470 [2024-11-27 10:02:49.804461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.470 [2024-11-27 10:02:49.804691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.470 [2024-11-27 10:02:49.804706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.470 [2024-11-27 10:02:49.813324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.470 [2024-11-27 10:02:49.813606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.471 [2024-11-27 10:02:49.813621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.471 [2024-11-27 10:02:49.822221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.471 [2024-11-27 10:02:49.822441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.471 [2024-11-27 10:02:49.822457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.471 [2024-11-27 10:02:49.831129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.471 [2024-11-27 10:02:49.831391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.471 [2024-11-27 10:02:49.831406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.471 [2024-11-27 10:02:49.839962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.471 [2024-11-27 10:02:49.840070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.471 [2024-11-27 10:02:49.840086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.471 [2024-11-27 10:02:49.848854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.471 [2024-11-27 10:02:49.849096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.471 [2024-11-27 10:02:49.849112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.471 [2024-11-27 10:02:49.857726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.471 [2024-11-27 10:02:49.857971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 
nsid:1 lba:25525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.471 [2024-11-27 10:02:49.857986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.471 [2024-11-27 10:02:49.866576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.471 [2024-11-27 10:02:49.866791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.471 [2024-11-27 10:02:49.866806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.471 [2024-11-27 10:02:49.875448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.471 [2024-11-27 10:02:49.875676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.471 [2024-11-27 10:02:49.875694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.471 [2024-11-27 10:02:49.884365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.471 [2024-11-27 10:02:49.884591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.471 [2024-11-27 10:02:49.884606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.471 [2024-11-27 10:02:49.893198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.471 [2024-11-27 10:02:49.893491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.471 [2024-11-27 10:02:49.893507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.471 [2024-11-27 10:02:49.902064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.471 [2024-11-27 10:02:49.902265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.471 [2024-11-27 10:02:49.902280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.471 [2024-11-27 10:02:49.911043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.471 [2024-11-27 10:02:49.911245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.471 [2024-11-27 10:02:49.911261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.471 [2024-11-27 10:02:49.919923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.471 [2024-11-27 10:02:49.920156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:57 nsid:1 lba:18060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.471 [2024-11-27 10:02:49.920175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.471 [2024-11-27 10:02:49.928793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.471 [2024-11-27 10:02:49.929018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.471 [2024-11-27 10:02:49.929033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.733 [2024-11-27 10:02:49.937682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.733 [2024-11-27 10:02:49.937907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-27 10:02:49.937923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.733 [2024-11-27 10:02:49.946638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.733 [2024-11-27 10:02:49.946869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-27 10:02:49.946884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.733 [2024-11-27 10:02:49.955556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.733 [2024-11-27 10:02:49.955794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-27 10:02:49.955809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.733 [2024-11-27 10:02:49.964430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.733 [2024-11-27 10:02:49.964631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-27 10:02:49.964646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.733 [2024-11-27 10:02:49.973343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.733 [2024-11-27 10:02:49.973586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-27 10:02:49.973602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.733 [2024-11-27 10:02:49.982194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.733 [2024-11-27 10:02:49.982403] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-27 10:02:49.982418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.733 [2024-11-27 10:02:49.991061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.733 [2024-11-27 10:02:49.991307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-27 10:02:49.991323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.733 [2024-11-27 10:02:49.999940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.733 [2024-11-27 10:02:50.000168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-27 10:02:50.000183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.733 [2024-11-27 10:02:50.009267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.733 [2024-11-27 10:02:50.009490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-27 10:02:50.009507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.733 [2024-11-27 10:02:50.018701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.733 [2024-11-27 10:02:50.018984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-27 10:02:50.018999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.733 [2024-11-27 10:02:50.027518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.733 [2024-11-27 10:02:50.027792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-27 10:02:50.027807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.733 [2024-11-27 10:02:50.036384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.733 [2024-11-27 10:02:50.036801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-27 10:02:50.036817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.733 [2024-11-27 10:02:50.045577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.733 [2024-11-27 
10:02:50.045857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-27 10:02:50.045879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.733 [2024-11-27 10:02:50.054434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.733 [2024-11-27 10:02:50.054680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-27 10:02:50.054695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.733 [2024-11-27 10:02:50.063305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.733 [2024-11-27 10:02:50.063591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-27 10:02:50.063607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.733 [2024-11-27 10:02:50.072181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.733 [2024-11-27 10:02:50.072401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-27 10:02:50.072416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.733 [2024-11-27 10:02:50.081028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.733 [2024-11-27 10:02:50.081266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-27 10:02:50.081282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.733 [2024-11-27 10:02:50.089888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.733 [2024-11-27 10:02:50.090146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-27 10:02:50.090166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.733 [2024-11-27 10:02:50.098710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.733 [2024-11-27 10:02:50.098811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-27 10:02:50.098826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.733 [2024-11-27 10:02:50.107594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 
00:30:34.733 [2024-11-27 10:02:50.107695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-27 10:02:50.107715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.733 [2024-11-27 10:02:50.116505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.733 [2024-11-27 10:02:50.116776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-27 10:02:50.116791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.733 [2024-11-27 10:02:50.125345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.733 [2024-11-27 10:02:50.125613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.733 [2024-11-27 10:02:50.125628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.733 [2024-11-27 10:02:50.134195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.734 [2024-11-27 10:02:50.134434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-27 10:02:50.134449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.734 [2024-11-27 10:02:50.143046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.734 [2024-11-27 10:02:50.143284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-27 10:02:50.143300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.734 [2024-11-27 10:02:50.151912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.734 [2024-11-27 10:02:50.152113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-27 10:02:50.152128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.734 [2024-11-27 10:02:50.160764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.734 [2024-11-27 10:02:50.161026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-27 10:02:50.161041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.734 [2024-11-27 10:02:50.169616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.734 [2024-11-27 10:02:50.169723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-27 10:02:50.169738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.734 [2024-11-27 10:02:50.178496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.734 [2024-11-27 10:02:50.178747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-27 10:02:50.178762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.734 [2024-11-27 10:02:50.187425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.734 [2024-11-27 10:02:50.187645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-27 10:02:50.187661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.734 [2024-11-27 10:02:50.196334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.734 [2024-11-27 10:02:50.196572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.734 [2024-11-27 10:02:50.196587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.996 [2024-11-27 10:02:50.205218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.996 [2024-11-27 10:02:50.205458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.996 [2024-11-27 10:02:50.205474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.996 [2024-11-27 10:02:50.214059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.996 [2024-11-27 10:02:50.214298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.996 [2024-11-27 10:02:50.214313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.996 [2024-11-27 10:02:50.222938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.996 [2024-11-27 10:02:50.223208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.996 [2024-11-27 10:02:50.223224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.996 [2024-11-27 10:02:50.231778] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.996 [2024-11-27 10:02:50.232052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.996 [2024-11-27 10:02:50.232066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.996 [2024-11-27 10:02:50.240631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.996 [2024-11-27 10:02:50.240845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.996 [2024-11-27 10:02:50.240859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.996 [2024-11-27 10:02:50.249462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.996 [2024-11-27 10:02:50.249725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.996 [2024-11-27 10:02:50.249740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.996 [2024-11-27 10:02:50.258290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.996 [2024-11-27 10:02:50.258536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.996 [2024-11-27 10:02:50.258552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.996 [2024-11-27 10:02:50.267090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.996 [2024-11-27 10:02:50.267385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.996 [2024-11-27 10:02:50.267401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.996 [2024-11-27 10:02:50.275948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.996 [2024-11-27 10:02:50.276191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.996 [2024-11-27 10:02:50.276206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.996 [2024-11-27 10:02:50.284838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.996 [2024-11-27 10:02:50.285100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.996 [2024-11-27 10:02:50.285114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.996 [2024-11-27 
10:02:50.293710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.996 [2024-11-27 10:02:50.293990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.996 [2024-11-27 10:02:50.294006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.996 [2024-11-27 10:02:50.302610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.996 [2024-11-27 10:02:50.302810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:10081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.996 [2024-11-27 10:02:50.302825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.996 [2024-11-27 10:02:50.311492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.996 [2024-11-27 10:02:50.311783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.996 [2024-11-27 10:02:50.311799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.996 [2024-11-27 10:02:50.320295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.996 [2024-11-27 10:02:50.320537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.996 [2024-11-27 10:02:50.320552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.996 [2024-11-27 10:02:50.329110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.996 [2024-11-27 10:02:50.329217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.996 [2024-11-27 10:02:50.329232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.996 [2024-11-27 10:02:50.337953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.996 [2024-11-27 10:02:50.338171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.996 [2024-11-27 10:02:50.338189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.996 [2024-11-27 10:02:50.346963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.996 [2024-11-27 10:02:50.347227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.996 [2024-11-27 10:02:50.347242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 
dnr:0 00:30:34.996 [2024-11-27 10:02:50.355815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.996 [2024-11-27 10:02:50.356072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.996 [2024-11-27 10:02:50.356093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.996 [2024-11-27 10:02:50.364679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.996 [2024-11-27 10:02:50.364922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.996 [2024-11-27 10:02:50.364937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.996 [2024-11-27 10:02:50.373579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.996 [2024-11-27 10:02:50.373795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.996 [2024-11-27 10:02:50.373810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.996 [2024-11-27 10:02:50.382480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.997 [2024-11-27 10:02:50.382720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.997 [2024-11-27 10:02:50.382735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.997 [2024-11-27 10:02:50.391396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.997 [2024-11-27 10:02:50.391591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.997 [2024-11-27 10:02:50.391606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.997 [2024-11-27 10:02:50.400249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.997 [2024-11-27 10:02:50.400477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.997 [2024-11-27 10:02:50.400492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.997 [2024-11-27 10:02:50.409115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.997 [2024-11-27 10:02:50.409342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.997 [2024-11-27 10:02:50.409357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:16 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.997 [2024-11-27 10:02:50.417902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.997 [2024-11-27 10:02:50.418150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.997 [2024-11-27 10:02:50.418168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.997 [2024-11-27 10:02:50.426807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.997 [2024-11-27 10:02:50.427059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.997 [2024-11-27 10:02:50.427081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.997 [2024-11-27 10:02:50.435662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9520) with pdu=0x2000166f7da8 00:30:34.997 [2024-11-27 10:02:50.435919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.997 [2024-11-27 10:02:50.435934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:34.997 28720.50 IOPS, 112.19 MiB/s 00:30:34.997 Latency(us) 00:30:34.997 [2024-11-27T09:02:50.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:34.997 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:34.997 nvme0n1 : 2.01 28722.76 112.20 0.00 0.00 4448.92 2252.80 9448.11 00:30:34.997 [2024-11-27T09:02:50.463Z] =================================================================================================================== 00:30:34.997 [2024-11-27T09:02:50.463Z] Total : 28722.76 112.20 0.00 0.00 4448.92 2252.80 9448.11 00:30:34.997 { 00:30:34.997 "results": [ 00:30:34.997 { 00:30:34.997 "job": "nvme0n1", 00:30:34.997 "core_mask": "0x2", 00:30:34.997 "workload": "randwrite", 00:30:34.997 "status": "finished", 00:30:34.997 "queue_depth": 128, 00:30:34.997 "io_size": 4096, 00:30:34.997 "runtime": 2.005413, 00:30:34.997 "iops": 28722.76184506633, 00:30:34.997 "mibps": 112.19828845729035, 00:30:34.997 "io_failed": 0, 00:30:34.997 "io_timeout": 0, 00:30:34.997 "avg_latency_us": 4448.919265522011, 00:30:34.997 "min_latency_us": 2252.8, 00:30:34.997 "max_latency_us": 9448.106666666667 00:30:34.997 } 00:30:34.997 ], 00:30:34.997 "core_count": 1 00:30:34.997 } 00:30:35.258 10:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:35.258 10:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:35.258 10:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:35.258 | .driver_specific 00:30:35.258 | .nvme_error 00:30:35.258 | .status_code 00:30:35.258 | .command_transient_transport_error' 00:30:35.258 10:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:35.258 
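The get_transient_errcount helper expanded in the trace above reduces to one RPC call piped through jq. A minimal stand-alone sketch in bash, assuming the bdevperf instance is still listening on /var/tmp/bperf.sock and that it was configured with bdev_nvme_set_options --nvme-error-stat (which is what populates the nvme_error counters):

    #!/usr/bin/env bash
    # Read per-bdev I/O statistics from the bdevperf SPDK instance and extract the
    # number of completions with NVMe status COMMAND TRANSIENT TRANSPORT ERROR (00/22).
    errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # The test then asserts that at least one digest error was observed,
    # which in this run evaluates as (( 225 > 0 )).
    (( errcount > 0 ))

As a cross-check on the summary above: 28722.76 IOPS at an io_size of 4096 bytes is 28722.76 * 4096 / 1048576 = 112.20 MiB/s, matching the reported throughput, and io_failed stays 0 because every digest failure is retried rather than failed.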
10:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 225 > 0 ))
00:30:35.258 10:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4058133
00:30:35.258 10:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 4058133 ']'
00:30:35.258 10:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 4058133
00:30:35.258 10:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:30:35.258 10:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:35.258 10:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4058133
00:30:35.518 10:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:30:35.518 10:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:30:35.519 10:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4058133'
00:30:35.519 killing process with pid 4058133
00:30:35.519 10:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 4058133
00:30:35.519 Received shutdown signal, test time was about 2.000000 seconds
00:30:35.519
00:30:35.519 Latency(us)
[2024-11-27T09:02:50.985Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-27T09:02:50.985Z] ===================================================================================================================
[2024-11-27T09:02:50.985Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:35.519 10:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 4058133
00:30:35.519 10:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:30:35.519 10:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:30:35.519 10:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:30:35.519 10:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:30:35.519 10:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:30:35.519 10:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4058820
00:30:35.519 10:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4058820 /var/tmp/bperf.sock
00:30:35.519 10:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 4058820 ']'
00:30:35.519 10:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:30:35.519 10:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:35.519 10:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:35.519 10:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:30:35.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
10:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
10:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:35.519 [2024-11-27 10:02:50.881461] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization...
[2024-11-27 10:02:50.881519] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4058820 ]
00:30:35.519 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:35.519 Zero copy mechanism will not be used.
00:30:35.519 [2024-11-27 10:02:50.964957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:35.779 [2024-11-27 10:02:50.994506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:36.350 10:02:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:36.350 10:02:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:30:36.350 10:02:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:36.350 10:02:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:36.610 10:02:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
10:02:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
10:02:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
10:02:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
10:02:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
10:02:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:36.870 nvme0n1
00:30:36.870 10:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
10:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
10:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
10:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
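Condensed from the helper functions traced above, the setup for this second error run amounts to the following sequence. This is a sketch, not the test script itself: the $SPDK shorthand, the backgrounding of bdevperf, and the omission of the waitforlisten/killprocess bookkeeping are mine, while every command and flag appears verbatim in the trace. Note that accel_error_inject_error is issued through rpc_cmd, i.e. to the target-side SPDK app on its default RPC socket, whereas the bperf_rpc calls go to /var/tmp/bperf.sock:

    #!/usr/bin/env bash
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # 1. Start bdevperf suspended (-z): 131072-byte random writes, queue depth 16, 2 s run.
    $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -t 2 -q 16 -z &

    # 2. Keep per-status-code NVMe error counters and retry failed I/O indefinitely,
    #    so injected digest errors are counted as transient errors, not I/O failures.
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1

    # 3. Attach the NVMe-oF/TCP controller with data digest (--ddgst) enabled on the qpair.
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # 4. On the target side, inject corruption into crc32c operations (flags exactly as
    #    traced) so data digests mismatch on the wire.
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

    # 5. Kick off the I/O phase; each corrupted digest surfaces below as a
    #    "Data digest error" plus a TRANSIENT TRANSPORT ERROR completion.
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests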
perform_tests 00:30:36.870 10:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:37.130 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:37.130 Zero copy mechanism will not be used. 00:30:37.130 Running I/O for 2 seconds... 00:30:37.130 [2024-11-27 10:02:52.353365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:37.130 [2024-11-27 10:02:52.353986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.130 [2024-11-27 10:02:52.354012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:37.130 [2024-11-27 10:02:52.362733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:37.130 [2024-11-27 10:02:52.362878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.130 [2024-11-27 10:02:52.362897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:37.130 [2024-11-27 10:02:52.372633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:37.130 [2024-11-27 10:02:52.372931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.130 [2024-11-27 10:02:52.372950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:37.130 [2024-11-27 10:02:52.383041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:37.130 [2024-11-27 10:02:52.383259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.130 [2024-11-27 10:02:52.383276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:37.130 [2024-11-27 10:02:52.393334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:37.130 [2024-11-27 10:02:52.393553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.130 [2024-11-27 10:02:52.393570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:37.130 [2024-11-27 10:02:52.403265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:37.130 [2024-11-27 10:02:52.403584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.130 [2024-11-27 10:02:52.403609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:37.130 [2024-11-27 10:02:52.413128] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:37.130 [2024-11-27 10:02:52.413342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.130 [2024-11-27 10:02:52.413360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:37.130 [2024-11-27 10:02:52.422231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:37.130 [2024-11-27 10:02:52.422520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.130 [2024-11-27 10:02:52.422537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:37.130 [2024-11-27 10:02:52.431915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:37.130 [2024-11-27 10:02:52.432093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.130 [2024-11-27 10:02:52.432109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:37.130 [2024-11-27 10:02:52.440846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:37.130 [2024-11-27 10:02:52.441094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.130 [2024-11-27 10:02:52.441110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:37.130 [2024-11-27 10:02:52.448210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:37.130 [2024-11-27 10:02:52.448469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.130 [2024-11-27 10:02:52.448486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:37.130 [2024-11-27 10:02:52.455820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:37.130 [2024-11-27 10:02:52.456164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.130 [2024-11-27 10:02:52.456180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:37.130 [2024-11-27 10:02:52.465618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:37.130 [2024-11-27 10:02:52.465910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.130 [2024-11-27 10:02:52.465926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:37.130 [2024-11-27 10:02:52.475333] 
00:30:37.130 [2024-11-27 10:02:52.475521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:37.130 [2024-11-27 10:02:52.475542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... ~130 similar record groups elided, 10:02:52.485 through 10:02:53.210: each tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 is followed by a *NOTICE* for the failed WRITE (sqid:1, cid:0/1, nsid:1, len:32, varying lba) and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:30:37.920 [2024-11-27 10:02:53.220064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8
00:30:37.920 [2024-11-27 10:02:53.220321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:37.920 [2024-11-27 10:02:53.220336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:37.920 [2024-11-27 10:02:53.230492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:37.920 [2024-11-27 10:02:53.230653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.920 [2024-11-27 10:02:53.230669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:37.920 [2024-11-27 10:02:53.240451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:37.920 [2024-11-27 10:02:53.240675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.920 [2024-11-27 10:02:53.240691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:37.920 [2024-11-27 10:02:53.248321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:37.920 [2024-11-27 10:02:53.248380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.920 [2024-11-27 10:02:53.248396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:37.920 [2024-11-27 10:02:53.251414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:37.920 [2024-11-27 10:02:53.251479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.920 [2024-11-27 10:02:53.251495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:37.920 [2024-11-27 10:02:53.254537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:37.920 [2024-11-27 10:02:53.254586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.920 [2024-11-27 10:02:53.254601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:37.920 [2024-11-27 10:02:53.257823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:37.920 [2024-11-27 10:02:53.257871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.920 [2024-11-27 10:02:53.257887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:37.920 [2024-11-27 10:02:53.260844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:37.920 [2024-11-27 10:02:53.260898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.920 [2024-11-27 10:02:53.260913] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:37.920 [2024-11-27 10:02:53.263903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:37.920 [2024-11-27 10:02:53.263961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.920 [2024-11-27 10:02:53.263976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:37.920 [2024-11-27 10:02:53.269744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:37.920 [2024-11-27 10:02:53.270000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.920 [2024-11-27 10:02:53.270017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:37.920 [2024-11-27 10:02:53.277339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:37.920 [2024-11-27 10:02:53.277583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.920 [2024-11-27 10:02:53.277599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:37.920 [2024-11-27 10:02:53.284476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:37.920 [2024-11-27 10:02:53.284545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.920 [2024-11-27 10:02:53.284560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:37.920 [2024-11-27 10:02:53.292724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:37.920 [2024-11-27 10:02:53.292996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.920 [2024-11-27 10:02:53.293012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:37.920 [2024-11-27 10:02:53.300637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:37.920 [2024-11-27 10:02:53.300930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.920 [2024-11-27 10:02:53.300945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:37.920 [2024-11-27 10:02:53.309852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:37.920 [2024-11-27 10:02:53.309941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.920 [2024-11-27 
10:02:53.309956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:37.920 [2024-11-27 10:02:53.317922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:37.920 [2024-11-27 10:02:53.317968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.920 [2024-11-27 10:02:53.317983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:37.920 [2024-11-27 10:02:53.322109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:37.920 [2024-11-27 10:02:53.322189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.921 [2024-11-27 10:02:53.322205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:37.921 [2024-11-27 10:02:53.331378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:37.921 [2024-11-27 10:02:53.331586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.921 [2024-11-27 10:02:53.331601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:37.921 [2024-11-27 10:02:53.340526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:37.921 [2024-11-27 10:02:53.340783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.921 [2024-11-27 10:02:53.340798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:37.921 5044.00 IOPS, 630.50 MiB/s [2024-11-27T09:02:53.387Z] [2024-11-27 10:02:53.351123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:37.921 [2024-11-27 10:02:53.351385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.921 [2024-11-27 10:02:53.351402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:37.921 [2024-11-27 10:02:53.359041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:37.921 [2024-11-27 10:02:53.359320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.921 [2024-11-27 10:02:53.359335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:37.921 [2024-11-27 10:02:53.369241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:37.921 [2024-11-27 10:02:53.369517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.921 [2024-11-27 10:02:53.369531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:37.921 [2024-11-27 10:02:53.379641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:37.921 [2024-11-27 10:02:53.379733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.921 [2024-11-27 10:02:53.379749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:38.183 [2024-11-27 10:02:53.389768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.183 [2024-11-27 10:02:53.389976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.183 [2024-11-27 10:02:53.389991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:38.183 [2024-11-27 10:02:53.400190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.183 [2024-11-27 10:02:53.400516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.183 [2024-11-27 10:02:53.400532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:38.183 [2024-11-27 10:02:53.410586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.183 [2024-11-27 10:02:53.410893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.183 [2024-11-27 10:02:53.410912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:38.183 [2024-11-27 10:02:53.420697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.183 [2024-11-27 10:02:53.420843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.183 [2024-11-27 10:02:53.420858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:38.183 [2024-11-27 10:02:53.431115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.183 [2024-11-27 10:02:53.431304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.183 [2024-11-27 10:02:53.431320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:38.183 [2024-11-27 10:02:53.441595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.183 [2024-11-27 10:02:53.441769] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.183 [2024-11-27 10:02:53.441784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:38.183 [2024-11-27 10:02:53.452012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.183 [2024-11-27 10:02:53.452101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.183 [2024-11-27 10:02:53.452116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:38.183 [2024-11-27 10:02:53.462225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.183 [2024-11-27 10:02:53.462439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.183 [2024-11-27 10:02:53.462455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:38.183 [2024-11-27 10:02:53.472330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.183 [2024-11-27 10:02:53.472554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.183 [2024-11-27 10:02:53.472570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:38.183 [2024-11-27 10:02:53.480801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.183 [2024-11-27 10:02:53.481084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.183 [2024-11-27 10:02:53.481100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:38.183 [2024-11-27 10:02:53.491079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.183 [2024-11-27 10:02:53.491362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.183 [2024-11-27 10:02:53.491377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:38.183 [2024-11-27 10:02:53.500866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.183 [2024-11-27 10:02:53.501052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.183 [2024-11-27 10:02:53.501067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:38.183 [2024-11-27 10:02:53.510415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.183 [2024-11-27 10:02:53.510696] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.183 [2024-11-27 10:02:53.510712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:38.183 [2024-11-27 10:02:53.514756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.183 [2024-11-27 10:02:53.514804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.183 [2024-11-27 10:02:53.514819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:38.183 [2024-11-27 10:02:53.517804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.184 [2024-11-27 10:02:53.517848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.184 [2024-11-27 10:02:53.517863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:38.184 [2024-11-27 10:02:53.520888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.184 [2024-11-27 10:02:53.520933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.184 [2024-11-27 10:02:53.520949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:38.184 [2024-11-27 10:02:53.523970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.184 [2024-11-27 10:02:53.524017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.184 [2024-11-27 10:02:53.524032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:38.184 [2024-11-27 10:02:53.527024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.184 [2024-11-27 10:02:53.527074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.184 [2024-11-27 10:02:53.527089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:38.184 [2024-11-27 10:02:53.529928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.184 [2024-11-27 10:02:53.529973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.184 [2024-11-27 10:02:53.529988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:38.184 [2024-11-27 10:02:53.532642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.184 [2024-11-27 
00:30:38.184 [... sequence repeats uninterrupted from 10:02:53.532642 through 10:02:54.042379, tqpair=(0x21f9860), WRITE qid:1 len:32 with varying lba, every command completed as TRANSIENT TRANSPORT ERROR (00/22) p:0 m:0 dnr:0 ...]
0x0 00:30:38.708 [2024-11-27 10:02:53.991472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:38.708 [2024-11-27 10:02:54.001410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.708 [2024-11-27 10:02:54.001578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.708 [2024-11-27 10:02:54.001593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:38.709 [2024-11-27 10:02:54.011764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.709 [2024-11-27 10:02:54.011962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.709 [2024-11-27 10:02:54.011977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:38.709 [2024-11-27 10:02:54.021962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.709 [2024-11-27 10:02:54.022197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.709 [2024-11-27 10:02:54.022212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:38.709 [2024-11-27 10:02:54.032588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.709 [2024-11-27 10:02:54.032848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.709 [2024-11-27 10:02:54.032863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:38.709 [2024-11-27 10:02:54.042267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.709 [2024-11-27 10:02:54.042364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.709 [2024-11-27 10:02:54.042379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:38.709 [2024-11-27 10:02:54.052163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.709 [2024-11-27 10:02:54.052289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.709 [2024-11-27 10:02:54.052304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:38.709 [2024-11-27 10:02:54.062517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.709 [2024-11-27 10:02:54.062774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.709 [2024-11-27 10:02:54.062791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:38.709 [2024-11-27 10:02:54.072551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.709 [2024-11-27 10:02:54.072809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.709 [2024-11-27 10:02:54.072824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:38.709 [2024-11-27 10:02:54.081982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.709 [2024-11-27 10:02:54.082330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.709 [2024-11-27 10:02:54.082346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:38.709 [2024-11-27 10:02:54.092518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.709 [2024-11-27 10:02:54.092616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.709 [2024-11-27 10:02:54.092631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:38.709 [2024-11-27 10:02:54.102377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.709 [2024-11-27 10:02:54.102467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.709 [2024-11-27 10:02:54.102485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:38.709 [2024-11-27 10:02:54.112809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.709 [2024-11-27 10:02:54.113074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.709 [2024-11-27 10:02:54.113099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:38.709 [2024-11-27 10:02:54.122756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.709 [2024-11-27 10:02:54.122943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.709 [2024-11-27 10:02:54.122958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:38.709 [2024-11-27 10:02:54.132505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.709 [2024-11-27 10:02:54.132748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.709 [2024-11-27 10:02:54.132764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:38.709 [2024-11-27 10:02:54.142636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.709 [2024-11-27 10:02:54.142911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.709 [2024-11-27 10:02:54.142926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:38.709 [2024-11-27 10:02:54.152712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.709 [2024-11-27 10:02:54.152980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.709 [2024-11-27 10:02:54.152996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:38.709 [2024-11-27 10:02:54.162783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.709 [2024-11-27 10:02:54.163039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.709 [2024-11-27 10:02:54.163054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:38.709 [2024-11-27 10:02:54.172565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.709 [2024-11-27 10:02:54.172829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.709 [2024-11-27 10:02:54.172844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:38.970 [2024-11-27 10:02:54.182337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.970 [2024-11-27 10:02:54.182590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.970 [2024-11-27 10:02:54.182606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:38.970 [2024-11-27 10:02:54.192229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.970 [2024-11-27 10:02:54.192438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.970 [2024-11-27 10:02:54.192454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:38.970 [2024-11-27 10:02:54.202053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.970 [2024-11-27 10:02:54.202317] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.970 [2024-11-27 10:02:54.202332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:38.970 [2024-11-27 10:02:54.212366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.970 [2024-11-27 10:02:54.212622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.970 [2024-11-27 10:02:54.212637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:38.970 [2024-11-27 10:02:54.222917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.970 [2024-11-27 10:02:54.223182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.970 [2024-11-27 10:02:54.223197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:38.970 [2024-11-27 10:02:54.233052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.970 [2024-11-27 10:02:54.233320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.970 [2024-11-27 10:02:54.233336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:38.970 [2024-11-27 10:02:54.243276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.970 [2024-11-27 10:02:54.243481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.970 [2024-11-27 10:02:54.243496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:38.970 [2024-11-27 10:02:54.253769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.970 [2024-11-27 10:02:54.253989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.970 [2024-11-27 10:02:54.254004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:38.970 [2024-11-27 10:02:54.263899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.970 [2024-11-27 10:02:54.264103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.970 [2024-11-27 10:02:54.264118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:38.970 [2024-11-27 10:02:54.274279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.970 [2024-11-27 10:02:54.274540] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.970 [2024-11-27 10:02:54.274556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:38.970 [2024-11-27 10:02:54.283949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.970 [2024-11-27 10:02:54.284192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.970 [2024-11-27 10:02:54.284207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:38.970 [2024-11-27 10:02:54.294386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.970 [2024-11-27 10:02:54.294620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.970 [2024-11-27 10:02:54.294635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:38.970 [2024-11-27 10:02:54.304887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.970 [2024-11-27 10:02:54.305166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.970 [2024-11-27 10:02:54.305183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:38.970 [2024-11-27 10:02:54.315150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.970 [2024-11-27 10:02:54.315404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.970 [2024-11-27 10:02:54.315419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:38.970 [2024-11-27 10:02:54.325248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.970 [2024-11-27 10:02:54.325433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.970 [2024-11-27 10:02:54.325448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:38.970 [2024-11-27 10:02:54.335444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.970 [2024-11-27 10:02:54.335572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.970 [2024-11-27 10:02:54.335588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:38.970 [2024-11-27 10:02:54.344947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.970 [2024-11-27 
10:02:54.345172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.970 [2024-11-27 10:02:54.345188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:38.970 4611.50 IOPS, 576.44 MiB/s [2024-11-27T09:02:54.436Z] [2024-11-27 10:02:54.355330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f9860) with pdu=0x2000166ff3c8 00:30:38.970 [2024-11-27 10:02:54.355461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.970 [2024-11-27 10:02:54.355476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:38.970 00:30:38.970 Latency(us) 00:30:38.970 [2024-11-27T09:02:54.436Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:38.970 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:38.970 nvme0n1 : 2.01 4605.18 575.65 0.00 0.00 3466.98 1242.45 13325.65 00:30:38.970 [2024-11-27T09:02:54.436Z] =================================================================================================================== 00:30:38.970 [2024-11-27T09:02:54.436Z] Total : 4605.18 575.65 0.00 0.00 3466.98 1242.45 13325.65 00:30:38.970 { 00:30:38.971 "results": [ 00:30:38.971 { 00:30:38.971 "job": "nvme0n1", 00:30:38.971 "core_mask": "0x2", 00:30:38.971 "workload": "randwrite", 00:30:38.971 "status": "finished", 00:30:38.971 "queue_depth": 16, 00:30:38.971 "io_size": 131072, 00:30:38.971 "runtime": 2.006871, 00:30:38.971 "iops": 4605.1789078620395, 00:30:38.971 "mibps": 575.6473634827549, 00:30:38.971 "io_failed": 0, 00:30:38.971 "io_timeout": 0, 00:30:38.971 "avg_latency_us": 3466.978287527952, 00:30:38.971 "min_latency_us": 1242.4533333333334, 00:30:38.971 "max_latency_us": 13325.653333333334 00:30:38.971 } 00:30:38.971 ], 00:30:38.971 "core_count": 1 00:30:38.971 } 00:30:38.971 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:38.971 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:38.971 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:38.971 | .driver_specific 00:30:38.971 | .nvme_error 00:30:38.971 | .status_code 00:30:38.971 | .command_transient_transport_error' 00:30:38.971 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:39.244 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 299 > 0 )) 00:30:39.244 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4058820 00:30:39.244 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 4058820 ']' 00:30:39.244 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 4058820 00:30:39.244 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:30:39.244 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:39.244 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4058820 00:30:39.244 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:39.244 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:39.244 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4058820' 00:30:39.244 killing process with pid 4058820 00:30:39.244 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 4058820 00:30:39.244 Received shutdown signal, test time was about 2.000000 seconds 00:30:39.244 00:30:39.244 Latency(us) 00:30:39.244 [2024-11-27T09:02:54.710Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:39.244 [2024-11-27T09:02:54.710Z] =================================================================================================================== 00:30:39.244 [2024-11-27T09:02:54.710Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:39.244 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 4058820 00:30:39.505 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 4056421 00:30:39.505 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 4056421 ']' 00:30:39.505 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 4056421 00:30:39.505 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:30:39.505 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:39.505 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4056421 00:30:39.505 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:39.505 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:39.505 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4056421' 00:30:39.505 killing process with pid 4056421 00:30:39.505 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 4056421 00:30:39.505 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 4056421 00:30:39.505 00:30:39.505 real 0m16.529s 00:30:39.505 user 0m32.596s 00:30:39.505 sys 0m3.765s 00:30:39.505 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:39.505 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:39.505 ************************************ 00:30:39.505 END TEST nvmf_digest_error 00:30:39.505 ************************************ 00:30:39.505 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:30:39.505 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:30:39.505 10:02:54 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:39.505 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:30:39.505 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:39.505 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:30:39.505 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:39.505 10:02:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:39.505 rmmod nvme_tcp 00:30:39.766 rmmod nvme_fabrics 00:30:39.766 rmmod nvme_keyring 00:30:39.766 10:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:39.766 10:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:30:39.766 10:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:30:39.766 10:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 4056421 ']' 00:30:39.766 10:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 4056421 00:30:39.766 10:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 4056421 ']' 00:30:39.766 10:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 4056421 00:30:39.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4056421) - No such process 00:30:39.766 10:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 4056421 is not found' 00:30:39.766 Process with pid 4056421 is not found 00:30:39.766 10:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:39.766 10:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:39.766 10:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:39.766 10:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:30:39.766 10:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:30:39.766 10:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:39.766 10:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:30:39.766 10:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:39.766 10:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:39.766 10:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:39.766 10:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:39.766 10:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:41.681 10:02:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:41.681 00:30:41.681 real 0m42.883s 00:30:41.681 user 1m7.346s 00:30:41.681 sys 0m13.402s 00:30:41.681 10:02:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:41.681 10:02:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:41.681 ************************************ 00:30:41.681 END TEST nvmf_digest 00:30:41.681 ************************************ 00:30:41.942 10:02:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:30:41.942 10:02:57 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:30:41.942 10:02:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:30:41.942 10:02:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:41.942 10:02:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:41.942 10:02:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:41.942 10:02:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.942 ************************************ 00:30:41.942 START TEST nvmf_bdevperf 00:30:41.942 ************************************ 00:30:41.942 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:41.942 * Looking for test storage... 00:30:41.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:41.942 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:41.942 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:30:41.942 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:41.942 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:41.942 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:41.942 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:41.942 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:41.942 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:30:41.942 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:30:41.942 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:30:41.942 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:30:41.942 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:30:41.942 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:30:41.942 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:30:41.942 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:41.942 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:30:41.942 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:30:41.942 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:41.942 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:41.942 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:30:41.942 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:30:41.942 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:41.942 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:30:41.942 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:41.942 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:30:41.942 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:30:41.942 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:41.942 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:30:41.943 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:41.943 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:41.943 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:41.943 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:30:41.943 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:41.943 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:41.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:41.943 --rc genhtml_branch_coverage=1 00:30:41.943 --rc genhtml_function_coverage=1 00:30:41.943 --rc genhtml_legend=1 00:30:41.943 --rc geninfo_all_blocks=1 00:30:41.943 --rc geninfo_unexecuted_blocks=1 00:30:41.943 00:30:41.943 ' 00:30:41.943 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:41.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:41.943 --rc genhtml_branch_coverage=1 00:30:41.943 --rc genhtml_function_coverage=1 00:30:41.943 --rc genhtml_legend=1 00:30:41.943 --rc geninfo_all_blocks=1 00:30:41.943 --rc geninfo_unexecuted_blocks=1 00:30:41.943 00:30:41.943 ' 00:30:41.943 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:41.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:41.943 --rc genhtml_branch_coverage=1 00:30:41.943 --rc genhtml_function_coverage=1 00:30:41.943 --rc genhtml_legend=1 00:30:41.943 --rc geninfo_all_blocks=1 00:30:41.943 --rc geninfo_unexecuted_blocks=1 00:30:41.943 00:30:41.943 ' 00:30:41.943 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:41.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:41.943 --rc genhtml_branch_coverage=1 00:30:41.943 --rc genhtml_function_coverage=1 00:30:41.943 --rc genhtml_legend=1 00:30:41.943 --rc geninfo_all_blocks=1 00:30:41.943 --rc geninfo_unexecuted_blocks=1 00:30:41.943 00:30:41.943 ' 00:30:41.943 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:41.943 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:30:41.943 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:41.943 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:41.943 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:42.206 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:30:42.206 10:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:50.353 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:50.353 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
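The xtrace lines around this point walk gather_supported_nvmf_pci_devs as it maps each e810 PCI function to its kernel net device through sysfs. As a rough standalone illustration (an editorial sketch, not part of the test run; it assumes the same two PCI addresses and a stock sysfs layout), the core of that mapping is:

for pci in 0000:4b:00.0 0000:4b:00.1; do
    # sysfs lists the net interface(s) bound to this PCI function
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    # keep just the interface names, mirroring the ${pci_net_devs[@]##*/} step traced below
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done

Run on this box, that loop would print the two cvl_0_* interfaces the trace reports next.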
00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.353 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:50.353 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:50.354 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:50.354 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:50.354 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:30:50.354 00:30:50.354 --- 10.0.0.2 ping statistics --- 00:30:50.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.354 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:50.354 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:50.354 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:30:50.354 00:30:50.354 --- 10.0.0.1 ping statistics --- 00:30:50.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.354 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:50.354 10:03:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:50.354 10:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:30:50.354 10:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:50.354 10:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:50.354 10:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:50.354 10:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:50.354 10:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=4063950 00:30:50.354 10:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 4063950 00:30:50.354 10:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:50.354 10:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 4063950 ']' 00:30:50.354 10:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:50.354 10:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:50.354 10:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:50.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:50.354 10:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:50.354 10:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:50.354 [2024-11-27 10:03:05.079667] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
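The nvmf_tcp_init block traced above reduces to a short two-port recipe: one e810 port (cvl_0_0) is moved into a private network namespace to host the target, while the other (cvl_0_1) stays in the root namespace as the initiator, so NVMe/TCP traffic actually crosses the physical link. A condensed replay of the exact commands logged (run as root; the cvl_0_* names are simply what the e810 ports enumerated as on this host):

# target namespace and addressing
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP, inside the ns
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
ping -c 1 10.0.0.2                                 # root ns -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator sanity check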
00:30:50.354 [2024-11-27 10:03:05.079733] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:50.354 [2024-11-27 10:03:05.180302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:50.354 [2024-11-27 10:03:05.232645] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:50.354 [2024-11-27 10:03:05.232695] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:50.354 [2024-11-27 10:03:05.232704] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:50.354 [2024-11-27 10:03:05.232711] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:50.354 [2024-11-27 10:03:05.232717] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:50.354 [2024-11-27 10:03:05.234840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:50.354 [2024-11-27 10:03:05.234997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:50.354 [2024-11-27 10:03:05.234999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:50.617 10:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:50.617 10:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:30:50.617 10:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:50.617 10:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:50.617 10:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:50.617 10:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:50.617 10:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:50.617 10:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.617 10:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:50.617 [2024-11-27 10:03:05.960949] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:50.617 10:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.617 10:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:50.617 10:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.617 10:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:50.617 Malloc0 00:30:50.617 10:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.617 10:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:50.617 10:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.617 10:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:50.617 10:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
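The tgt_init sequence that follows starts nvmf_tgt inside the namespace and then builds the subsystem over RPC; rpc_cmd in the trace is a thin wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock. A minimal hand-run equivalent from an SPDK checkout (the harness additionally passes -i 0 -e 0xFFFF for tracing):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -m 0xE &   # cores 1-3
# wait for /var/tmp/spdk.sock to appear (waitforlisten in the trace), then:
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192      # flags exactly as traced
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM disk, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420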
00:30:50.617 10:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:50.617 10:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.617 10:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:50.617 10:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.617 10:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:50.617 10:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.617 10:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:50.617 [2024-11-27 10:03:06.042810] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:50.617 10:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.617 10:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:30:50.617 10:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:30:50.617 10:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:30:50.617 10:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:30:50.617 10:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:50.617 10:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:50.617 { 00:30:50.617 "params": { 00:30:50.617 "name": "Nvme$subsystem", 00:30:50.617 "trtype": "$TEST_TRANSPORT", 00:30:50.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.617 "adrfam": "ipv4", 00:30:50.617 "trsvcid": "$NVMF_PORT", 00:30:50.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.617 "hdgst": ${hdgst:-false}, 00:30:50.617 "ddgst": ${ddgst:-false} 00:30:50.617 }, 00:30:50.617 "method": "bdev_nvme_attach_controller" 00:30:50.617 } 00:30:50.617 EOF 00:30:50.617 )") 00:30:50.617 10:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:30:50.617 10:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:30:50.617 10:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:30:50.617 10:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:50.617 "params": { 00:30:50.617 "name": "Nvme1", 00:30:50.617 "trtype": "tcp", 00:30:50.617 "traddr": "10.0.0.2", 00:30:50.617 "adrfam": "ipv4", 00:30:50.617 "trsvcid": "4420", 00:30:50.617 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:50.617 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:50.617 "hdgst": false, 00:30:50.617 "ddgst": false 00:30:50.617 }, 00:30:50.617 "method": "bdev_nvme_attach_controller" 00:30:50.617 }' 00:30:50.879 [2024-11-27 10:03:06.103975] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
00:30:50.880 [2024-11-27 10:03:06.104041] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4064093 ] 00:30:50.880 [2024-11-27 10:03:06.196440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.880 [2024-11-27 10:03:06.249870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:51.141 Running I/O for 1 seconds... 00:30:52.084 8690.00 IOPS, 33.95 MiB/s 00:30:52.084 Latency(us) 00:30:52.084 [2024-11-27T09:03:07.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.084 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:52.084 Verification LBA range: start 0x0 length 0x4000 00:30:52.084 Nvme1n1 : 1.01 8737.90 34.13 0.00 0.00 14580.36 2266.45 13926.40 00:30:52.084 [2024-11-27T09:03:07.550Z] =================================================================================================================== 00:30:52.084 [2024-11-27T09:03:07.550Z] Total : 8737.90 34.13 0.00 0.00 14580.36 2266.45 13926.40 00:30:52.345 10:03:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=4064332 00:30:52.345 10:03:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:30:52.345 10:03:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:30:52.345 10:03:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:30:52.345 10:03:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:30:52.345 10:03:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:30:52.345 10:03:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:52.345 10:03:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:52.345 { 00:30:52.345 "params": { 00:30:52.345 "name": "Nvme$subsystem", 00:30:52.345 "trtype": "$TEST_TRANSPORT", 00:30:52.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:52.345 "adrfam": "ipv4", 00:30:52.345 "trsvcid": "$NVMF_PORT", 00:30:52.346 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:52.346 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:52.346 "hdgst": ${hdgst:-false}, 00:30:52.346 "ddgst": ${ddgst:-false} 00:30:52.346 }, 00:30:52.346 "method": "bdev_nvme_attach_controller" 00:30:52.346 } 00:30:52.346 EOF 00:30:52.346 )") 00:30:52.346 10:03:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:30:52.346 10:03:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
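Both bdevperf runs use the same launch pattern: gen_nvmf_target_json emits a bdev_nvme_attach_controller config for Nvme1 (the JSON printed in the trace) and hands it to bdevperf on an anonymous fd, which is why the command lines show --json /dev/fd/62 and /dev/fd/63. A hand-run sketch of this second run, assuming the nvmf test helpers are sourced from an SPDK checkout and their environment (TEST_TRANSPORT, NVMF_FIRST_TARGET_IP, NVMF_PORT) is populated as above:

source test/nvmf/common.sh   # provides gen_nvmf_target_json
./build/examples/bdevperf --json <(gen_nvmf_target_json) \
    -q 128 -o 4096 -w verify -t 15 -f
# -q/-o/-w/-t/-f exactly as traced; -f tells bdevperf to keep running when
# I/O fails, which matters for the failure this test is about to inject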
00:30:52.346 10:03:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:30:52.346 10:03:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:52.346 "params": { 00:30:52.346 "name": "Nvme1", 00:30:52.346 "trtype": "tcp", 00:30:52.346 "traddr": "10.0.0.2", 00:30:52.346 "adrfam": "ipv4", 00:30:52.346 "trsvcid": "4420", 00:30:52.346 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:52.346 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:52.346 "hdgst": false, 00:30:52.346 "ddgst": false 00:30:52.346 }, 00:30:52.346 "method": "bdev_nvme_attach_controller" 00:30:52.346 }' 00:30:52.346 [2024-11-27 10:03:07.665567] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:30:52.346 [2024-11-27 10:03:07.665641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4064332 ] 00:30:52.346 [2024-11-27 10:03:07.756373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.346 [2024-11-27 10:03:07.792121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.607 Running I/O for 15 seconds... 00:30:54.934 10018.00 IOPS, 39.13 MiB/s [2024-11-27T09:03:10.664Z] 10538.50 IOPS, 41.17 MiB/s [2024-11-27T09:03:10.664Z] 10:03:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 4063950 00:30:55.198 10:03:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:30:55.198 [2024-11-27 10:03:10.627631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:95208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.198 [2024-11-27 10:03:10.627670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.198 [2024-11-27 10:03:10.627690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:95216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.198 [2024-11-27 10:03:10.627699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.198 [2024-11-27 10:03:10.627711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.198 [2024-11-27 10:03:10.627721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.198 [2024-11-27 10:03:10.627733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.198 [2024-11-27 10:03:10.627749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.198 [2024-11-27 10:03:10.627760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.198 [2024-11-27 10:03:10.627768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.198 [2024-11-27 10:03:10.627778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.198 [2024-11-27 
10:03:10.627787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pair repeats for every remaining outstanding command on qid:1 -- WRITEs lba:95256 through lba:95912 and READs lba:94896 through lba:95192, one pair per cid -- each completing ABORTED - SQ DELETION (00/08) ...]
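The wall of NOTICE pairs above is the failure injection working as intended: host/bdevperf.sh kills the target out from under the active 15-second verify run, the host-side submission queue is torn down, and every in-flight command completes with ABORTED - SQ DELETION, i.e. generic status (SCT 0x0), status code 0x08 -- the (00/08) in each completion line. The trigger is just the two trace lines above the dump:

kill -9 "$nvmfpid"   # SIGKILL the nvmf_tgt started earlier (pid 4063950 here)
sleep 3              # let bdev_nvme notice the dead connection and start resetting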
[2024-11-27 10:03:10.629998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2308450 is same with the state(6) to be set 00:30:55.201 [2024-11-27 10:03:10.630007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:55.201 [2024-11-27 10:03:10.630013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:55.201 [2024-11-27 10:03:10.630021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95200 len:8 PRP1 0x0 PRP2 0x0 00:30:55.201 [2024-11-27 10:03:10.630030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.201 [2024-11-27 10:03:10.633627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.201 [2024-11-27 10:03:10.633683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.201 [2024-11-27 10:03:10.634528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.201 [2024-11-27 10:03:10.634566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.201 [2024-11-27 10:03:10.634577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.201 [2024-11-27 10:03:10.634814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.201 [2024-11-27 10:03:10.635035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.201 [2024-11-27 10:03:10.635044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.201 [2024-11-27 10:03:10.635054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.201 [2024-11-27 10:03:10.635064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:55.201 [2024-11-27 10:03:10.647821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.201 [2024-11-27 10:03:10.648463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.201 [2024-11-27 10:03:10.648502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.201 [2024-11-27 10:03:10.648515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.201 [2024-11-27 10:03:10.648752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.201 [2024-11-27 10:03:10.648973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.201 [2024-11-27 10:03:10.648982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.201 [2024-11-27 10:03:10.648991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:30:55.201 [2024-11-27 10:03:10.648999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:55.201 [2024-11-27 10:03:10.661570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.464 [2024-11-27 10:03:10.662183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.464 [2024-11-27 10:03:10.662222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.464 [2024-11-27 10:03:10.662235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.464 [2024-11-27 10:03:10.662476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.464 [2024-11-27 10:03:10.662697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.464 [2024-11-27 10:03:10.662707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.464 [2024-11-27 10:03:10.662715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.464 [2024-11-27 10:03:10.662723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:55.464 [2024-11-27 10:03:10.675495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.464 [2024-11-27 10:03:10.675997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.464 [2024-11-27 10:03:10.676016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.464 [2024-11-27 10:03:10.676029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.464 [2024-11-27 10:03:10.676252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.464 [2024-11-27 10:03:10.676470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.464 [2024-11-27 10:03:10.676478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.464 [2024-11-27 10:03:10.676486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.465 [2024-11-27 10:03:10.676493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
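Every retry cycle in this stretch of the log is the same four-step sequence: nvme_ctrlr_disconnect, connect() failing with errno = 111, controller reinitialization failing, and bdev_nvme marking the reset failed before the next attempt roughly 14 ms later (compare the disconnect timestamps 10.633627, 10.647821, 10.661570, ...). On Linux, errno 111 is ECONNREFUSED: the target address 10.0.0.2:4420 is reachable but nothing is listening there, which is the expected state while the test holds the subsystem down. A standalone sketch (not SPDK code) that reproduces the same errno against a port with no listener:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);               /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    /* On a reachable host with no listener on this port, Linux fails the
     * call with errno 111 (ECONNREFUSED) -- the same value printed by
     * posix_sock_create above. An unreachable host would instead time out
     * or return EHOSTUNREACH. */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}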
00:30:55.465 [2024-11-27 10:03:10.689356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.465 [2024-11-27 10:03:10.689816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.465 [2024-11-27 10:03:10.689856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.465 [2024-11-27 10:03:10.689868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.465 [2024-11-27 10:03:10.690104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.465 [2024-11-27 10:03:10.690338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.465 [2024-11-27 10:03:10.690349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.465 [2024-11-27 10:03:10.690357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.465 [2024-11-27 10:03:10.690366] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:55.465 [2024-11-27 10:03:10.703115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.465 [2024-11-27 10:03:10.703668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.465 [2024-11-27 10:03:10.703688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.465 [2024-11-27 10:03:10.703696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.465 [2024-11-27 10:03:10.703912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.465 [2024-11-27 10:03:10.704129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.465 [2024-11-27 10:03:10.704137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.465 [2024-11-27 10:03:10.704145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.465 [2024-11-27 10:03:10.704151] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:55.465 [2024-11-27 10:03:10.716913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.465 [2024-11-27 10:03:10.717422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.465 [2024-11-27 10:03:10.717440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.465 [2024-11-27 10:03:10.717448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.465 [2024-11-27 10:03:10.717665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.465 [2024-11-27 10:03:10.717886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.465 [2024-11-27 10:03:10.717895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.465 [2024-11-27 10:03:10.717902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.465 [2024-11-27 10:03:10.717909] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:55.465 [2024-11-27 10:03:10.730665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.465 [2024-11-27 10:03:10.731256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.465 [2024-11-27 10:03:10.731299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.465 [2024-11-27 10:03:10.731312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.465 [2024-11-27 10:03:10.731554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.465 [2024-11-27 10:03:10.731776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.465 [2024-11-27 10:03:10.731786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.465 [2024-11-27 10:03:10.731794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.465 [2024-11-27 10:03:10.731802] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:55.465 [2024-11-27 10:03:10.744569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.465 [2024-11-27 10:03:10.745185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.465 [2024-11-27 10:03:10.745241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.465 [2024-11-27 10:03:10.745253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.465 [2024-11-27 10:03:10.745494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.465 [2024-11-27 10:03:10.745715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.465 [2024-11-27 10:03:10.745723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.465 [2024-11-27 10:03:10.745731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.465 [2024-11-27 10:03:10.745739] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:55.465 [2024-11-27 10:03:10.758306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.465 [2024-11-27 10:03:10.758848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.465 [2024-11-27 10:03:10.758895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.465 [2024-11-27 10:03:10.758906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.465 [2024-11-27 10:03:10.759148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.465 [2024-11-27 10:03:10.759381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.465 [2024-11-27 10:03:10.759392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.465 [2024-11-27 10:03:10.759405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.465 [2024-11-27 10:03:10.759414] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:55.465 [2024-11-27 10:03:10.772204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.465 [2024-11-27 10:03:10.772826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.465 [2024-11-27 10:03:10.772876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.465 [2024-11-27 10:03:10.772888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.465 [2024-11-27 10:03:10.773132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.465 [2024-11-27 10:03:10.773366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.465 [2024-11-27 10:03:10.773377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.465 [2024-11-27 10:03:10.773385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.465 [2024-11-27 10:03:10.773394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:55.465 [2024-11-27 10:03:10.785972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.465 [2024-11-27 10:03:10.786641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.465 [2024-11-27 10:03:10.786693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.465 [2024-11-27 10:03:10.786705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.465 [2024-11-27 10:03:10.786950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.465 [2024-11-27 10:03:10.787183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.465 [2024-11-27 10:03:10.787194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.465 [2024-11-27 10:03:10.787203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.465 [2024-11-27 10:03:10.787211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:55.465 [2024-11-27 10:03:10.799781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.465 [2024-11-27 10:03:10.800476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.465 [2024-11-27 10:03:10.800531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.465 [2024-11-27 10:03:10.800543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.465 [2024-11-27 10:03:10.800789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.465 [2024-11-27 10:03:10.801013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.465 [2024-11-27 10:03:10.801022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.465 [2024-11-27 10:03:10.801031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.465 [2024-11-27 10:03:10.801041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:55.465 [2024-11-27 10:03:10.813643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.465 [2024-11-27 10:03:10.814278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.465 [2024-11-27 10:03:10.814341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.465 [2024-11-27 10:03:10.814356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.465 [2024-11-27 10:03:10.814610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.466 [2024-11-27 10:03:10.814834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.466 [2024-11-27 10:03:10.814844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.466 [2024-11-27 10:03:10.814853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.466 [2024-11-27 10:03:10.814863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:55.466 [2024-11-27 10:03:10.827453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.466 [2024-11-27 10:03:10.828124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.466 [2024-11-27 10:03:10.828197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.466 [2024-11-27 10:03:10.828211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.466 [2024-11-27 10:03:10.828463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.466 [2024-11-27 10:03:10.828687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.466 [2024-11-27 10:03:10.828696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.466 [2024-11-27 10:03:10.828704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.466 [2024-11-27 10:03:10.828713] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:55.466 [2024-11-27 10:03:10.841318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.466 [2024-11-27 10:03:10.842004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.466 [2024-11-27 10:03:10.842067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.466 [2024-11-27 10:03:10.842080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.466 [2024-11-27 10:03:10.842343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.466 [2024-11-27 10:03:10.842569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.466 [2024-11-27 10:03:10.842579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.466 [2024-11-27 10:03:10.842587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.466 [2024-11-27 10:03:10.842597] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:55.466 [2024-11-27 10:03:10.855189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.466 [2024-11-27 10:03:10.855908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.466 [2024-11-27 10:03:10.855971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.466 [2024-11-27 10:03:10.855991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.466 [2024-11-27 10:03:10.856255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.466 [2024-11-27 10:03:10.856481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.466 [2024-11-27 10:03:10.856491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.466 [2024-11-27 10:03:10.856499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.466 [2024-11-27 10:03:10.856508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:55.466 [2024-11-27 10:03:10.869084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.466 [2024-11-27 10:03:10.869687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.466 [2024-11-27 10:03:10.869716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.466 [2024-11-27 10:03:10.869724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.466 [2024-11-27 10:03:10.869944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.466 [2024-11-27 10:03:10.870168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.466 [2024-11-27 10:03:10.870179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.466 [2024-11-27 10:03:10.870187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.466 [2024-11-27 10:03:10.870194] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:55.466 [2024-11-27 10:03:10.882999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.466 [2024-11-27 10:03:10.883523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.466 [2024-11-27 10:03:10.883551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.466 [2024-11-27 10:03:10.883560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.466 [2024-11-27 10:03:10.883780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.466 [2024-11-27 10:03:10.883999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.466 [2024-11-27 10:03:10.884007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.466 [2024-11-27 10:03:10.884015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.466 [2024-11-27 10:03:10.884023] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:55.466 [2024-11-27 10:03:10.896855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.466 [2024-11-27 10:03:10.897443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.466 [2024-11-27 10:03:10.897470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.466 [2024-11-27 10:03:10.897479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.466 [2024-11-27 10:03:10.897697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.466 [2024-11-27 10:03:10.897922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.466 [2024-11-27 10:03:10.897933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.466 [2024-11-27 10:03:10.897941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.466 [2024-11-27 10:03:10.897949] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:55.466 [2024-11-27 10:03:10.910763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.466 [2024-11-27 10:03:10.911323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.466 [2024-11-27 10:03:10.911387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.466 [2024-11-27 10:03:10.911404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.466 [2024-11-27 10:03:10.911657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.466 [2024-11-27 10:03:10.911882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.466 [2024-11-27 10:03:10.911894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.466 [2024-11-27 10:03:10.911902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.466 [2024-11-27 10:03:10.911912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:55.466 [2024-11-27 10:03:10.924538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.466 [2024-11-27 10:03:10.925086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.466 [2024-11-27 10:03:10.925116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.466 [2024-11-27 10:03:10.925126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.466 [2024-11-27 10:03:10.925354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.466 [2024-11-27 10:03:10.925574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.466 [2024-11-27 10:03:10.925584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.466 [2024-11-27 10:03:10.925591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.466 [2024-11-27 10:03:10.925600] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:55.729 [2024-11-27 10:03:10.938400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.729 [2024-11-27 10:03:10.938879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.729 [2024-11-27 10:03:10.938902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.729 [2024-11-27 10:03:10.938910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.729 [2024-11-27 10:03:10.939128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.729 [2024-11-27 10:03:10.939357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.729 [2024-11-27 10:03:10.939368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.729 [2024-11-27 10:03:10.939384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.729 [2024-11-27 10:03:10.939392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:55.729 [2024-11-27 10:03:10.952175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.729 [2024-11-27 10:03:10.952712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.729 [2024-11-27 10:03:10.952734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.729 [2024-11-27 10:03:10.952743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.729 [2024-11-27 10:03:10.952960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.729 [2024-11-27 10:03:10.953188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.729 [2024-11-27 10:03:10.953198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.729 [2024-11-27 10:03:10.953205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.729 [2024-11-27 10:03:10.953213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:55.729 [2024-11-27 10:03:10.966012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.729 [2024-11-27 10:03:10.966670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.729 [2024-11-27 10:03:10.966733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.729 [2024-11-27 10:03:10.966746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.729 [2024-11-27 10:03:10.966999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.729 [2024-11-27 10:03:10.967235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.729 [2024-11-27 10:03:10.967246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.729 [2024-11-27 10:03:10.967254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.729 [2024-11-27 10:03:10.967263] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:55.729 [2024-11-27 10:03:10.979865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.729 [2024-11-27 10:03:10.980569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.729 [2024-11-27 10:03:10.980633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.729 [2024-11-27 10:03:10.980646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.729 [2024-11-27 10:03:10.980897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.729 [2024-11-27 10:03:10.981122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.729 [2024-11-27 10:03:10.981132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.729 [2024-11-27 10:03:10.981141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.729 [2024-11-27 10:03:10.981150] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:55.729 9415.33 IOPS, 36.78 MiB/s [2024-11-27T09:03:11.195Z] [2024-11-27 10:03:10.995435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.729 [2024-11-27 10:03:10.996096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.729 [2024-11-27 10:03:10.996170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.729 [2024-11-27 10:03:10.996183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.729 [2024-11-27 10:03:10.996436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.729 [2024-11-27 10:03:10.996660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.729 [2024-11-27 10:03:10.996670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.729 [2024-11-27 10:03:10.996679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.729 [2024-11-27 10:03:10.996688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:55.729 [2024-11-27 10:03:11.009237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.729 [2024-11-27 10:03:11.009942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.729 [2024-11-27 10:03:11.010005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.729 [2024-11-27 10:03:11.010017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.729 [2024-11-27 10:03:11.010283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.729 [2024-11-27 10:03:11.010510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.729 [2024-11-27 10:03:11.010521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.729 [2024-11-27 10:03:11.010529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.729 [2024-11-27 10:03:11.010539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
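The "9415.33 IOPS, 36.78 MiB/s" fragment interleaved above is the test's periodic throughput sample, and the two numbers agree with the I/O size inferred earlier: assuming 4 KiB per read (len:8 blocks at an assumed 512-byte block size), 9415.33 x 4096 B works out to about 36.78 MiB per second. A quick check of that arithmetic:

#include <stdio.h>

int main(void)
{
    double iops = 9415.33;        /* from the log's throughput sample */
    double io_bytes = 8 * 512.0;  /* len:8 x 512 B = 4 KiB (assumption) */
    double mib_s = iops * io_bytes / (1024.0 * 1024.0);
    printf("%.2f IOPS x %.0f B = %.2f MiB/s\n", iops, io_bytes, mib_s);
    /* Prints 36.78 MiB/s, matching the sampled figure. */
    return 0;
}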
00:30:55.729 [2024-11-27 10:03:11.023140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.729 [2024-11-27 10:03:11.023696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.729 [2024-11-27 10:03:11.023724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.729 [2024-11-27 10:03:11.023734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.729 [2024-11-27 10:03:11.023954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.729 [2024-11-27 10:03:11.024180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.729 [2024-11-27 10:03:11.024191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.729 [2024-11-27 10:03:11.024200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.729 [2024-11-27 10:03:11.024207] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:55.729 [2024-11-27 10:03:11.036997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.729 [2024-11-27 10:03:11.037543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.730 [2024-11-27 10:03:11.037582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.730 [2024-11-27 10:03:11.037591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.730 [2024-11-27 10:03:11.037810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.730 [2024-11-27 10:03:11.038028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.730 [2024-11-27 10:03:11.038038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.730 [2024-11-27 10:03:11.038045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.730 [2024-11-27 10:03:11.038053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:55.730 [2024-11-27 10:03:11.050863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.730 [2024-11-27 10:03:11.051413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.730 [2024-11-27 10:03:11.051439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.730 [2024-11-27 10:03:11.051449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.730 [2024-11-27 10:03:11.051668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.730 [2024-11-27 10:03:11.051886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.730 [2024-11-27 10:03:11.051896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.730 [2024-11-27 10:03:11.051904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.730 [2024-11-27 10:03:11.051913] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:55.730 [2024-11-27 10:03:11.064708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.730 [2024-11-27 10:03:11.065267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.730 [2024-11-27 10:03:11.065292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.730 [2024-11-27 10:03:11.065300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.730 [2024-11-27 10:03:11.065519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.730 [2024-11-27 10:03:11.065737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.730 [2024-11-27 10:03:11.065746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.730 [2024-11-27 10:03:11.065754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.730 [2024-11-27 10:03:11.065762] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:55.730 [2024-11-27 10:03:11.078582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.730 [2024-11-27 10:03:11.079152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.730 [2024-11-27 10:03:11.079185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.730 [2024-11-27 10:03:11.079194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.730 [2024-11-27 10:03:11.079412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.730 [2024-11-27 10:03:11.079638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.730 [2024-11-27 10:03:11.079647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.730 [2024-11-27 10:03:11.079656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.730 [2024-11-27 10:03:11.079664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:55.730 [2024-11-27 10:03:11.092476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.730 [2024-11-27 10:03:11.093046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.730 [2024-11-27 10:03:11.093070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.730 [2024-11-27 10:03:11.093080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.730 [2024-11-27 10:03:11.093307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.730 [2024-11-27 10:03:11.093526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.730 [2024-11-27 10:03:11.093544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.730 [2024-11-27 10:03:11.093552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.730 [2024-11-27 10:03:11.093562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:55.730 [2024-11-27 10:03:11.106349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.730 [2024-11-27 10:03:11.106869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.730 [2024-11-27 10:03:11.106896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.730 [2024-11-27 10:03:11.106906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.730 [2024-11-27 10:03:11.107125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.730 [2024-11-27 10:03:11.107352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.730 [2024-11-27 10:03:11.107365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.730 [2024-11-27 10:03:11.107373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.730 [2024-11-27 10:03:11.107381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:55.730 [2024-11-27 10:03:11.120181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.730 [2024-11-27 10:03:11.120712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.730 [2024-11-27 10:03:11.120736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.730 [2024-11-27 10:03:11.120744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.730 [2024-11-27 10:03:11.120961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.730 [2024-11-27 10:03:11.121188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.730 [2024-11-27 10:03:11.121197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.730 [2024-11-27 10:03:11.121212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.730 [2024-11-27 10:03:11.121220] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:55.730 [2024-11-27 10:03:11.134004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.730 [2024-11-27 10:03:11.134583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.730 [2024-11-27 10:03:11.134606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.730 [2024-11-27 10:03:11.134615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.730 [2024-11-27 10:03:11.134832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.730 [2024-11-27 10:03:11.135051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.730 [2024-11-27 10:03:11.135063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.730 [2024-11-27 10:03:11.135073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.730 [2024-11-27 10:03:11.135081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:55.730 [2024-11-27 10:03:11.147883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.730 [2024-11-27 10:03:11.148416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.730 [2024-11-27 10:03:11.148440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.730 [2024-11-27 10:03:11.148449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.730 [2024-11-27 10:03:11.148667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.731 [2024-11-27 10:03:11.148885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.731 [2024-11-27 10:03:11.148894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.731 [2024-11-27 10:03:11.148902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.731 [2024-11-27 10:03:11.148910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:55.731 [2024-11-27 10:03:11.161708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.731 [2024-11-27 10:03:11.162237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.731 [2024-11-27 10:03:11.162260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.731 [2024-11-27 10:03:11.162269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.731 [2024-11-27 10:03:11.162487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.731 [2024-11-27 10:03:11.162705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.731 [2024-11-27 10:03:11.162715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.731 [2024-11-27 10:03:11.162723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.731 [2024-11-27 10:03:11.162731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:55.731 [2024-11-27 10:03:11.175580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.731 [2024-11-27 10:03:11.176082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.731 [2024-11-27 10:03:11.176106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.731 [2024-11-27 10:03:11.176114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.731 [2024-11-27 10:03:11.176339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.731 [2024-11-27 10:03:11.176558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.731 [2024-11-27 10:03:11.176567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.731 [2024-11-27 10:03:11.176575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.731 [2024-11-27 10:03:11.176582] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:55.731 [2024-11-27 10:03:11.189402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.731 [2024-11-27 10:03:11.189936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.731 [2024-11-27 10:03:11.189959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.731 [2024-11-27 10:03:11.189968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.731 [2024-11-27 10:03:11.190192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.731 [2024-11-27 10:03:11.190416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.731 [2024-11-27 10:03:11.190426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.731 [2024-11-27 10:03:11.190436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.731 [2024-11-27 10:03:11.190444] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:55.993 [2024-11-27 10:03:11.203264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.993 [2024-11-27 10:03:11.203920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.993 [2024-11-27 10:03:11.203983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.993 [2024-11-27 10:03:11.203996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.993 [2024-11-27 10:03:11.204259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.993 [2024-11-27 10:03:11.204486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.993 [2024-11-27 10:03:11.204495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.993 [2024-11-27 10:03:11.204505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.993 [2024-11-27 10:03:11.204514] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:55.993 [2024-11-27 10:03:11.217113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.993 [2024-11-27 10:03:11.217778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.993 [2024-11-27 10:03:11.217849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.993 [2024-11-27 10:03:11.217863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.993 [2024-11-27 10:03:11.218114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.993 [2024-11-27 10:03:11.218355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.993 [2024-11-27 10:03:11.218366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.993 [2024-11-27 10:03:11.218375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.993 [2024-11-27 10:03:11.218384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:55.993 [2024-11-27 10:03:11.230991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.993 [2024-11-27 10:03:11.231685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.993 [2024-11-27 10:03:11.231749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.993 [2024-11-27 10:03:11.231762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.993 [2024-11-27 10:03:11.232014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.993 [2024-11-27 10:03:11.232255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.993 [2024-11-27 10:03:11.232265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.993 [2024-11-27 10:03:11.232273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.993 [2024-11-27 10:03:11.232282] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:55.993 [2024-11-27 10:03:11.244784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.993 [2024-11-27 10:03:11.245408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.993 [2024-11-27 10:03:11.245439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.993 [2024-11-27 10:03:11.245448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.993 [2024-11-27 10:03:11.245668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.993 [2024-11-27 10:03:11.245886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.993 [2024-11-27 10:03:11.245897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.993 [2024-11-27 10:03:11.245904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.994 [2024-11-27 10:03:11.245912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:55.994 [2024-11-27 10:03:11.258725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.994 [2024-11-27 10:03:11.259304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.994 [2024-11-27 10:03:11.259368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.994 [2024-11-27 10:03:11.259382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.994 [2024-11-27 10:03:11.259635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.994 [2024-11-27 10:03:11.259868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.994 [2024-11-27 10:03:11.259880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.994 [2024-11-27 10:03:11.259889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.994 [2024-11-27 10:03:11.259898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:55.994 [2024-11-27 10:03:11.272518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.994 [2024-11-27 10:03:11.273070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.994 [2024-11-27 10:03:11.273097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.994 [2024-11-27 10:03:11.273106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.994 [2024-11-27 10:03:11.273353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.994 [2024-11-27 10:03:11.273573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.994 [2024-11-27 10:03:11.273583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.994 [2024-11-27 10:03:11.273591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.994 [2024-11-27 10:03:11.273599] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:55.994 [2024-11-27 10:03:11.286407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.994 [2024-11-27 10:03:11.286977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.994 [2024-11-27 10:03:11.287001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.994 [2024-11-27 10:03:11.287010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.994 [2024-11-27 10:03:11.287236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.994 [2024-11-27 10:03:11.287468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.994 [2024-11-27 10:03:11.287479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.994 [2024-11-27 10:03:11.287486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.994 [2024-11-27 10:03:11.287494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:55.994 [2024-11-27 10:03:11.300302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.994 [2024-11-27 10:03:11.300972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.994 [2024-11-27 10:03:11.301035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.994 [2024-11-27 10:03:11.301048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.994 [2024-11-27 10:03:11.301312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.994 [2024-11-27 10:03:11.301537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.994 [2024-11-27 10:03:11.301547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.994 [2024-11-27 10:03:11.301562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.994 [2024-11-27 10:03:11.301571] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:55.994 [2024-11-27 10:03:11.314184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.994 [2024-11-27 10:03:11.314804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.994 [2024-11-27 10:03:11.314833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.994 [2024-11-27 10:03:11.314842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.994 [2024-11-27 10:03:11.315061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.994 [2024-11-27 10:03:11.315290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.994 [2024-11-27 10:03:11.315302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.994 [2024-11-27 10:03:11.315310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.994 [2024-11-27 10:03:11.315317] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:55.994 [2024-11-27 10:03:11.328116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.994 [2024-11-27 10:03:11.328732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.994 [2024-11-27 10:03:11.328797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.994 [2024-11-27 10:03:11.328810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.994 [2024-11-27 10:03:11.329062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.994 [2024-11-27 10:03:11.329300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.994 [2024-11-27 10:03:11.329312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.994 [2024-11-27 10:03:11.329321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.994 [2024-11-27 10:03:11.329330] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:55.994 [2024-11-27 10:03:11.341929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.994 [2024-11-27 10:03:11.342516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.994 [2024-11-27 10:03:11.342551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.994 [2024-11-27 10:03:11.342560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.994 [2024-11-27 10:03:11.342783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.994 [2024-11-27 10:03:11.343002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.994 [2024-11-27 10:03:11.343012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.994 [2024-11-27 10:03:11.343020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.994 [2024-11-27 10:03:11.343028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:55.994 [2024-11-27 10:03:11.355729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.994 [2024-11-27 10:03:11.356304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.994 [2024-11-27 10:03:11.356367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.994 [2024-11-27 10:03:11.356382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.994 [2024-11-27 10:03:11.356636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.994 [2024-11-27 10:03:11.356861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.994 [2024-11-27 10:03:11.356871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.994 [2024-11-27 10:03:11.356880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.994 [2024-11-27 10:03:11.356889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:55.994 [2024-11-27 10:03:11.369500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.994 [2024-11-27 10:03:11.370064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.994 [2024-11-27 10:03:11.370123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.994 [2024-11-27 10:03:11.370135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.994 [2024-11-27 10:03:11.370397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.994 [2024-11-27 10:03:11.370623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.994 [2024-11-27 10:03:11.370633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.994 [2024-11-27 10:03:11.370642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.994 [2024-11-27 10:03:11.370651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:55.994 [2024-11-27 10:03:11.383262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.994 [2024-11-27 10:03:11.383809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.994 [2024-11-27 10:03:11.383835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.994 [2024-11-27 10:03:11.383843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.994 [2024-11-27 10:03:11.384061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.994 [2024-11-27 10:03:11.384288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.994 [2024-11-27 10:03:11.384298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.994 [2024-11-27 10:03:11.384307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.995 [2024-11-27 10:03:11.384314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:55.995 [2024-11-27 10:03:11.397149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.995 [2024-11-27 10:03:11.397712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.995 [2024-11-27 10:03:11.397734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.995 [2024-11-27 10:03:11.397749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.995 [2024-11-27 10:03:11.397967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.995 [2024-11-27 10:03:11.398192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.995 [2024-11-27 10:03:11.398202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.995 [2024-11-27 10:03:11.398210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.995 [2024-11-27 10:03:11.398218] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:55.995 [2024-11-27 10:03:11.411001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.995 [2024-11-27 10:03:11.411640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.995 [2024-11-27 10:03:11.411692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.995 [2024-11-27 10:03:11.411704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.995 [2024-11-27 10:03:11.411949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.995 [2024-11-27 10:03:11.412183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.995 [2024-11-27 10:03:11.412193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.995 [2024-11-27 10:03:11.412201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.995 [2024-11-27 10:03:11.412210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:55.995 [2024-11-27 10:03:11.424788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.995 [2024-11-27 10:03:11.425371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.995 [2024-11-27 10:03:11.425423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.995 [2024-11-27 10:03:11.425435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.995 [2024-11-27 10:03:11.425680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.995 [2024-11-27 10:03:11.425903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.995 [2024-11-27 10:03:11.425912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.995 [2024-11-27 10:03:11.425920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.995 [2024-11-27 10:03:11.425929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:55.995 [2024-11-27 10:03:11.438738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.995 [2024-11-27 10:03:11.439240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.995 [2024-11-27 10:03:11.439265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.995 [2024-11-27 10:03:11.439273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.995 [2024-11-27 10:03:11.439491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.995 [2024-11-27 10:03:11.439715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.995 [2024-11-27 10:03:11.439724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.995 [2024-11-27 10:03:11.439731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.995 [2024-11-27 10:03:11.439739] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:55.995 [2024-11-27 10:03:11.452515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:55.995 [2024-11-27 10:03:11.453022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.995 [2024-11-27 10:03:11.453042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:55.995 [2024-11-27 10:03:11.453050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:55.995 [2024-11-27 10:03:11.453273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:55.995 [2024-11-27 10:03:11.453490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:55.995 [2024-11-27 10:03:11.453499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:55.995 [2024-11-27 10:03:11.453506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:55.995 [2024-11-27 10:03:11.453513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:56.267 [2024-11-27 10:03:11.466287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:56.267 [2024-11-27 10:03:11.466884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.267 [2024-11-27 10:03:11.466929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:56.267 [2024-11-27 10:03:11.466942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:56.267 [2024-11-27 10:03:11.467193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:56.267 [2024-11-27 10:03:11.467415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:56.267 [2024-11-27 10:03:11.467425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:56.267 [2024-11-27 10:03:11.467434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:56.267 [2024-11-27 10:03:11.467443] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:56.267 [2024-11-27 10:03:11.480028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:56.267 [2024-11-27 10:03:11.480706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.267 [2024-11-27 10:03:11.480751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:56.267 [2024-11-27 10:03:11.480763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:56.267 [2024-11-27 10:03:11.481002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:56.267 [2024-11-27 10:03:11.481233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:56.267 [2024-11-27 10:03:11.481244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:56.267 [2024-11-27 10:03:11.481257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:56.267 [2024-11-27 10:03:11.481266] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:56.267 [2024-11-27 10:03:11.493847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:56.267 [2024-11-27 10:03:11.494537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.267 [2024-11-27 10:03:11.494581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:56.267 [2024-11-27 10:03:11.494594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:56.267 [2024-11-27 10:03:11.494837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:56.267 [2024-11-27 10:03:11.495058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:56.267 [2024-11-27 10:03:11.495068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:56.267 [2024-11-27 10:03:11.495076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:56.267 [2024-11-27 10:03:11.495084] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:56.267 [2024-11-27 10:03:11.507654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:56.267 [2024-11-27 10:03:11.508289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.267 [2024-11-27 10:03:11.508332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:56.267 [2024-11-27 10:03:11.508345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:56.267 [2024-11-27 10:03:11.508587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:56.267 [2024-11-27 10:03:11.508808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:56.267 [2024-11-27 10:03:11.508818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:56.267 [2024-11-27 10:03:11.508826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:56.267 [2024-11-27 10:03:11.508834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:56.267 [2024-11-27 10:03:11.521409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:56.267 [2024-11-27 10:03:11.522006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.267 [2024-11-27 10:03:11.522048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:56.267 [2024-11-27 10:03:11.522059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:56.267 [2024-11-27 10:03:11.522306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:56.267 [2024-11-27 10:03:11.522528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:56.267 [2024-11-27 10:03:11.522538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:56.267 [2024-11-27 10:03:11.522546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:56.267 [2024-11-27 10:03:11.522554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:56.267 [2024-11-27 10:03:11.535323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:56.267 [2024-11-27 10:03:11.535821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.267 [2024-11-27 10:03:11.535841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:56.267 [2024-11-27 10:03:11.535849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:56.267 [2024-11-27 10:03:11.536065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:56.267 [2024-11-27 10:03:11.536287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:56.267 [2024-11-27 10:03:11.536297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:56.267 [2024-11-27 10:03:11.536305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:56.267 [2024-11-27 10:03:11.536312] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:56.267 [2024-11-27 10:03:11.549068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:56.267 [2024-11-27 10:03:11.549669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.267 [2024-11-27 10:03:11.549710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:56.267 [2024-11-27 10:03:11.549721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:56.267 [2024-11-27 10:03:11.549959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:56.267 [2024-11-27 10:03:11.550188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:56.267 [2024-11-27 10:03:11.550198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:56.267 [2024-11-27 10:03:11.550206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:56.267 [2024-11-27 10:03:11.550214] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:56.267 [2024-11-27 10:03:11.562964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:56.267 [2024-11-27 10:03:11.563593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.267 [2024-11-27 10:03:11.563613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:56.267 [2024-11-27 10:03:11.563621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:56.267 [2024-11-27 10:03:11.563837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:56.267 [2024-11-27 10:03:11.564053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:56.267 [2024-11-27 10:03:11.564062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:56.267 [2024-11-27 10:03:11.564069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:56.267 [2024-11-27 10:03:11.564076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:56.267 [2024-11-27 10:03:11.576847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:56.267 [2024-11-27 10:03:11.577496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.267 [2024-11-27 10:03:11.577535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:56.267 [2024-11-27 10:03:11.577551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:56.267 [2024-11-27 10:03:11.577788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:56.267 [2024-11-27 10:03:11.578008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:56.267 [2024-11-27 10:03:11.578018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:56.268 [2024-11-27 10:03:11.578026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:56.268 [2024-11-27 10:03:11.578034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:56.268 [2024-11-27 10:03:11.590598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:56.268 [2024-11-27 10:03:11.591140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.268 [2024-11-27 10:03:11.591165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:56.268 [2024-11-27 10:03:11.591174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:56.268 [2024-11-27 10:03:11.591390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:56.268 [2024-11-27 10:03:11.591607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:56.268 [2024-11-27 10:03:11.591621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:56.268 [2024-11-27 10:03:11.591629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:56.268 [2024-11-27 10:03:11.591636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:56.268 [2024-11-27 10:03:11.604392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:56.268 [2024-11-27 10:03:11.605040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.268 [2024-11-27 10:03:11.605078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:56.268 [2024-11-27 10:03:11.605089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:56.268 [2024-11-27 10:03:11.605333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:56.268 [2024-11-27 10:03:11.605554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:56.268 [2024-11-27 10:03:11.605563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:56.268 [2024-11-27 10:03:11.605571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:56.268 [2024-11-27 10:03:11.605578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:56.268 [2024-11-27 10:03:11.618131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:56.268 [2024-11-27 10:03:11.618728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.268 [2024-11-27 10:03:11.618766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:56.268 [2024-11-27 10:03:11.618777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:56.268 [2024-11-27 10:03:11.619013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:56.268 [2024-11-27 10:03:11.619247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:56.268 [2024-11-27 10:03:11.619257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:56.268 [2024-11-27 10:03:11.619265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:56.268 [2024-11-27 10:03:11.619272] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:56.268 [2024-11-27 10:03:11.632037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:56.268 [2024-11-27 10:03:11.632654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.268 [2024-11-27 10:03:11.632691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:56.268 [2024-11-27 10:03:11.632703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:56.268 [2024-11-27 10:03:11.632938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:56.268 [2024-11-27 10:03:11.633169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:56.268 [2024-11-27 10:03:11.633179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:56.268 [2024-11-27 10:03:11.633187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:56.268 [2024-11-27 10:03:11.633195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:56.268 [2024-11-27 10:03:11.645980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:56.268 [2024-11-27 10:03:11.646523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.268 [2024-11-27 10:03:11.646542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:56.268 [2024-11-27 10:03:11.646551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:56.268 [2024-11-27 10:03:11.646767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:56.268 [2024-11-27 10:03:11.646983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:56.268 [2024-11-27 10:03:11.646992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:56.268 [2024-11-27 10:03:11.647000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:56.268 [2024-11-27 10:03:11.647008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:56.268 [2024-11-27 10:03:11.659862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:56.268 [2024-11-27 10:03:11.660516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.268 [2024-11-27 10:03:11.660554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:56.268 [2024-11-27 10:03:11.660566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:56.268 [2024-11-27 10:03:11.660805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:56.268 [2024-11-27 10:03:11.661025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:56.268 [2024-11-27 10:03:11.661035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:56.268 [2024-11-27 10:03:11.661048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:56.268 [2024-11-27 10:03:11.661056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:56.268 [2024-11-27 10:03:11.673610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:56.268 [2024-11-27 10:03:11.674144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.268 [2024-11-27 10:03:11.674177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:56.268 [2024-11-27 10:03:11.674186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:56.268 [2024-11-27 10:03:11.674402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:56.268 [2024-11-27 10:03:11.674619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:56.268 [2024-11-27 10:03:11.674627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:56.268 [2024-11-27 10:03:11.674634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:56.268 [2024-11-27 10:03:11.674641] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:56.268 [2024-11-27 10:03:11.687403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:56.268 [2024-11-27 10:03:11.687797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.268 [2024-11-27 10:03:11.687816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:56.268 [2024-11-27 10:03:11.687824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:56.268 [2024-11-27 10:03:11.688040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:56.268 [2024-11-27 10:03:11.688262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:56.268 [2024-11-27 10:03:11.688272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:56.268 [2024-11-27 10:03:11.688279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:56.268 [2024-11-27 10:03:11.688286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:56.268 [2024-11-27 10:03:11.701251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:56.268 [2024-11-27 10:03:11.701737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.268 [2024-11-27 10:03:11.701754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:56.268 [2024-11-27 10:03:11.701762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:56.268 [2024-11-27 10:03:11.701977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:56.268 [2024-11-27 10:03:11.702198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:56.268 [2024-11-27 10:03:11.702207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:56.268 [2024-11-27 10:03:11.702214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:56.268 [2024-11-27 10:03:11.702220] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:56.268 [2024-11-27 10:03:11.715016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:56.268 [2024-11-27 10:03:11.715605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.268 [2024-11-27 10:03:11.715645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:56.268 [2024-11-27 10:03:11.715656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:56.268 [2024-11-27 10:03:11.715893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:56.268 [2024-11-27 10:03:11.716113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:56.269 [2024-11-27 10:03:11.716123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:56.269 [2024-11-27 10:03:11.716131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:56.269 [2024-11-27 10:03:11.716139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:56.269 [2024-11-27 10:03:11.727699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:56.269 [2024-11-27 10:03:11.728062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.269 [2024-11-27 10:03:11.728077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:56.269 [2024-11-27 10:03:11.728083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:56.269 [2024-11-27 10:03:11.728237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:56.269 [2024-11-27 10:03:11.728386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:56.269 [2024-11-27 10:03:11.728392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:56.269 [2024-11-27 10:03:11.728397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:56.269 [2024-11-27 10:03:11.728402] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:56.531 [2024-11-27 10:03:11.740399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:56.531 [2024-11-27 10:03:11.740871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.531 [2024-11-27 10:03:11.740884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:56.531 [2024-11-27 10:03:11.740889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:56.531 [2024-11-27 10:03:11.741038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:56.531 [2024-11-27 10:03:11.741191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:56.531 [2024-11-27 10:03:11.741197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:56.531 [2024-11-27 10:03:11.741202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:56.531 [2024-11-27 10:03:11.741208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:56.531 [2024-11-27 10:03:11.753050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:56.531 [2024-11-27 10:03:11.753689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.531 [2024-11-27 10:03:11.753720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:56.531 [2024-11-27 10:03:11.753732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:56.531 [2024-11-27 10:03:11.753897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:56.531 [2024-11-27 10:03:11.754049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:56.531 [2024-11-27 10:03:11.754056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:56.531 [2024-11-27 10:03:11.754062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:56.531 [2024-11-27 10:03:11.754067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:56.531 [2024-11-27 10:03:11.765642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:56.531 [2024-11-27 10:03:11.766077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.531 [2024-11-27 10:03:11.766107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:56.531 [2024-11-27 10:03:11.766116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:56.531 [2024-11-27 10:03:11.766289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:56.531 [2024-11-27 10:03:11.766441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:56.531 [2024-11-27 10:03:11.766448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:56.531 [2024-11-27 10:03:11.766454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:56.531 [2024-11-27 10:03:11.766459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:56.531 [2024-11-27 10:03:11.778311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:56.531 [2024-11-27 10:03:11.778871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.531 [2024-11-27 10:03:11.778901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:56.531 [2024-11-27 10:03:11.778909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:56.531 [2024-11-27 10:03:11.779074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:56.531 [2024-11-27 10:03:11.779232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:56.531 [2024-11-27 10:03:11.779239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:56.531 [2024-11-27 10:03:11.779245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:56.531 [2024-11-27 10:03:11.779251] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:56.531 [2024-11-27 10:03:11.790958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.531 [2024-11-27 10:03:11.791465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.531 [2024-11-27 10:03:11.791495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.531 [2024-11-27 10:03:11.791504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.531 [2024-11-27 10:03:11.791671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.531 [2024-11-27 10:03:11.791826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.531 [2024-11-27 10:03:11.791833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.531 [2024-11-27 10:03:11.791838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.531 [2024-11-27 10:03:11.791844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.531 [2024-11-27 10:03:11.803567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.531 [2024-11-27 10:03:11.803906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.531 [2024-11-27 10:03:11.803920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.531 [2024-11-27 10:03:11.803926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.531 [2024-11-27 10:03:11.804075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.531 [2024-11-27 10:03:11.804228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.531 [2024-11-27 10:03:11.804235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.531 [2024-11-27 10:03:11.804240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.531 [2024-11-27 10:03:11.804245] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.531 [2024-11-27 10:03:11.816238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.531 [2024-11-27 10:03:11.816703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.531 [2024-11-27 10:03:11.816716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.531 [2024-11-27 10:03:11.816721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.531 [2024-11-27 10:03:11.816869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.531 [2024-11-27 10:03:11.817018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.531 [2024-11-27 10:03:11.817023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.531 [2024-11-27 10:03:11.817028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.531 [2024-11-27 10:03:11.817032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.531 [2024-11-27 10:03:11.828883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.531 [2024-11-27 10:03:11.829478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.531 [2024-11-27 10:03:11.829509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.531 [2024-11-27 10:03:11.829517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.531 [2024-11-27 10:03:11.829681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.531 [2024-11-27 10:03:11.829833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.531 [2024-11-27 10:03:11.829840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.531 [2024-11-27 10:03:11.829849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.531 [2024-11-27 10:03:11.829854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.531 [2024-11-27 10:03:11.841473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.532 [2024-11-27 10:03:11.842033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.532 [2024-11-27 10:03:11.842062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.532 [2024-11-27 10:03:11.842071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.532 [2024-11-27 10:03:11.842245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.532 [2024-11-27 10:03:11.842397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.532 [2024-11-27 10:03:11.842403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.532 [2024-11-27 10:03:11.842409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.532 [2024-11-27 10:03:11.842415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.532 [2024-11-27 10:03:11.854127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.532 [2024-11-27 10:03:11.854676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.532 [2024-11-27 10:03:11.854706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.532 [2024-11-27 10:03:11.854715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.532 [2024-11-27 10:03:11.854880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.532 [2024-11-27 10:03:11.855031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.532 [2024-11-27 10:03:11.855038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.532 [2024-11-27 10:03:11.855044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.532 [2024-11-27 10:03:11.855049] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.532 [2024-11-27 10:03:11.866754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.532 [2024-11-27 10:03:11.867082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.532 [2024-11-27 10:03:11.867099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.532 [2024-11-27 10:03:11.867105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.532 [2024-11-27 10:03:11.867263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.532 [2024-11-27 10:03:11.867413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.532 [2024-11-27 10:03:11.867419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.532 [2024-11-27 10:03:11.867424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.532 [2024-11-27 10:03:11.867429] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.532 [2024-11-27 10:03:11.879413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.532 [2024-11-27 10:03:11.879864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.532 [2024-11-27 10:03:11.879878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.532 [2024-11-27 10:03:11.879883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.532 [2024-11-27 10:03:11.880031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.532 [2024-11-27 10:03:11.880186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.532 [2024-11-27 10:03:11.880192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.532 [2024-11-27 10:03:11.880197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.532 [2024-11-27 10:03:11.880202] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.532 [2024-11-27 10:03:11.892045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.532 [2024-11-27 10:03:11.892614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.532 [2024-11-27 10:03:11.892644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.532 [2024-11-27 10:03:11.892652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.532 [2024-11-27 10:03:11.892819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.532 [2024-11-27 10:03:11.892971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.532 [2024-11-27 10:03:11.892978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.532 [2024-11-27 10:03:11.892983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.532 [2024-11-27 10:03:11.892989] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.532 [2024-11-27 10:03:11.904689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.532 [2024-11-27 10:03:11.905127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.532 [2024-11-27 10:03:11.905142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.532 [2024-11-27 10:03:11.905147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.532 [2024-11-27 10:03:11.905302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.532 [2024-11-27 10:03:11.905451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.532 [2024-11-27 10:03:11.905457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.532 [2024-11-27 10:03:11.905463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.532 [2024-11-27 10:03:11.905468] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.532 [2024-11-27 10:03:11.917323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.532 [2024-11-27 10:03:11.917634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.532 [2024-11-27 10:03:11.917649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.532 [2024-11-27 10:03:11.917660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.532 [2024-11-27 10:03:11.917809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.532 [2024-11-27 10:03:11.917959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.532 [2024-11-27 10:03:11.917965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.532 [2024-11-27 10:03:11.917971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.532 [2024-11-27 10:03:11.917976] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.532 [2024-11-27 10:03:11.929971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.532 [2024-11-27 10:03:11.930418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.532 [2024-11-27 10:03:11.930432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.532 [2024-11-27 10:03:11.930437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.532 [2024-11-27 10:03:11.930585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.532 [2024-11-27 10:03:11.930734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.532 [2024-11-27 10:03:11.930740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.532 [2024-11-27 10:03:11.930745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.532 [2024-11-27 10:03:11.930749] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.532 [2024-11-27 10:03:11.942589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.532 [2024-11-27 10:03:11.943032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.532 [2024-11-27 10:03:11.943044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.532 [2024-11-27 10:03:11.943049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.532 [2024-11-27 10:03:11.943201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.532 [2024-11-27 10:03:11.943350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.532 [2024-11-27 10:03:11.943357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.532 [2024-11-27 10:03:11.943362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.532 [2024-11-27 10:03:11.943366] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.532 [2024-11-27 10:03:11.955202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.532 [2024-11-27 10:03:11.955651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.532 [2024-11-27 10:03:11.955662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.532 [2024-11-27 10:03:11.955668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.532 [2024-11-27 10:03:11.955815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.532 [2024-11-27 10:03:11.955967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.532 [2024-11-27 10:03:11.955973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.533 [2024-11-27 10:03:11.955978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.533 [2024-11-27 10:03:11.955983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.533 [2024-11-27 10:03:11.967818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.533 [2024-11-27 10:03:11.968384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.533 [2024-11-27 10:03:11.968414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.533 [2024-11-27 10:03:11.968423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.533 [2024-11-27 10:03:11.968587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.533 [2024-11-27 10:03:11.968739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.533 [2024-11-27 10:03:11.968745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.533 [2024-11-27 10:03:11.968751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.533 [2024-11-27 10:03:11.968756] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.533 [2024-11-27 10:03:11.980465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.533 [2024-11-27 10:03:11.980994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.533 [2024-11-27 10:03:11.981025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.533 [2024-11-27 10:03:11.981034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.533 [2024-11-27 10:03:11.981205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.533 [2024-11-27 10:03:11.981358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.533 [2024-11-27 10:03:11.981364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.533 [2024-11-27 10:03:11.981370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.533 [2024-11-27 10:03:11.981376] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.533 [2024-11-27 10:03:11.993083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.533 [2024-11-27 10:03:11.993629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.533 [2024-11-27 10:03:11.993660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.533 [2024-11-27 10:03:11.993668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.533 [2024-11-27 10:03:11.993832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.533 [2024-11-27 10:03:11.993984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.533 [2024-11-27 10:03:11.993991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.533 [2024-11-27 10:03:11.994000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.533 [2024-11-27 10:03:11.994006] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.795 7061.50 IOPS, 27.58 MiB/s [2024-11-27T09:03:12.261Z]
00:30:56.795 [2024-11-27 10:03:12.005722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.795 [2024-11-27 10:03:12.006295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.795 [2024-11-27 10:03:12.006326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.795 [2024-11-27 10:03:12.006335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.795 [2024-11-27 10:03:12.006502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.795 [2024-11-27 10:03:12.006654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.795 [2024-11-27 10:03:12.006661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.795 [2024-11-27 10:03:12.006666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.795 [2024-11-27 10:03:12.006671] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.795 [2024-11-27 10:03:12.018385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.795 [2024-11-27 10:03:12.018931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.795 [2024-11-27 10:03:12.018961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.795 [2024-11-27 10:03:12.018970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.795 [2024-11-27 10:03:12.019134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.795 [2024-11-27 10:03:12.019294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.795 [2024-11-27 10:03:12.019301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.795 [2024-11-27 10:03:12.019307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.795 [2024-11-27 10:03:12.019313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.795 [2024-11-27 10:03:12.031008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.795 [2024-11-27 10:03:12.031557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.795 [2024-11-27 10:03:12.031588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.795 [2024-11-27 10:03:12.031597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.795 [2024-11-27 10:03:12.031761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.795 [2024-11-27 10:03:12.031913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.795 [2024-11-27 10:03:12.031920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.795 [2024-11-27 10:03:12.031925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.795 [2024-11-27 10:03:12.031931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.795 [2024-11-27 10:03:12.043635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.795 [2024-11-27 10:03:12.044172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.795 [2024-11-27 10:03:12.044202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.795 [2024-11-27 10:03:12.044211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.795 [2024-11-27 10:03:12.044376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.795 [2024-11-27 10:03:12.044528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.795 [2024-11-27 10:03:12.044534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.795 [2024-11-27 10:03:12.044539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.795 [2024-11-27 10:03:12.044545] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.795 [2024-11-27 10:03:12.056249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.795 [2024-11-27 10:03:12.056795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.795 [2024-11-27 10:03:12.056825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.795 [2024-11-27 10:03:12.056834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.795 [2024-11-27 10:03:12.056999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.795 [2024-11-27 10:03:12.057151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.795 [2024-11-27 10:03:12.057165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.795 [2024-11-27 10:03:12.057171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.795 [2024-11-27 10:03:12.057176] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.795 [2024-11-27 10:03:12.068874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.795 [2024-11-27 10:03:12.069323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.795 [2024-11-27 10:03:12.069338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.795 [2024-11-27 10:03:12.069344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.795 [2024-11-27 10:03:12.069493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.795 [2024-11-27 10:03:12.069642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.795 [2024-11-27 10:03:12.069647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.795 [2024-11-27 10:03:12.069653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.795 [2024-11-27 10:03:12.069657] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.795 [2024-11-27 10:03:12.081525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.795 [2024-11-27 10:03:12.081974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.795 [2024-11-27 10:03:12.081991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.795 [2024-11-27 10:03:12.081996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.795 [2024-11-27 10:03:12.082145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.795 [2024-11-27 10:03:12.082299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.795 [2024-11-27 10:03:12.082305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.795 [2024-11-27 10:03:12.082310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.795 [2024-11-27 10:03:12.082315] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.795 [2024-11-27 10:03:12.094154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.795 [2024-11-27 10:03:12.094633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.795 [2024-11-27 10:03:12.094646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.796 [2024-11-27 10:03:12.094651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.796 [2024-11-27 10:03:12.094800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.796 [2024-11-27 10:03:12.094948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.796 [2024-11-27 10:03:12.094954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.796 [2024-11-27 10:03:12.094959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.796 [2024-11-27 10:03:12.094964] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.796 [2024-11-27 10:03:12.106800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.796 [2024-11-27 10:03:12.107401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.796 [2024-11-27 10:03:12.107432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.796 [2024-11-27 10:03:12.107440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.796 [2024-11-27 10:03:12.107605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.796 [2024-11-27 10:03:12.107757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.796 [2024-11-27 10:03:12.107763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.796 [2024-11-27 10:03:12.107768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.796 [2024-11-27 10:03:12.107774] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.796 [2024-11-27 10:03:12.119476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.796 [2024-11-27 10:03:12.120021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.796 [2024-11-27 10:03:12.120052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.796 [2024-11-27 10:03:12.120060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.796 [2024-11-27 10:03:12.120236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.796 [2024-11-27 10:03:12.120389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.796 [2024-11-27 10:03:12.120395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.796 [2024-11-27 10:03:12.120401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.796 [2024-11-27 10:03:12.120406] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.796 [2024-11-27 10:03:12.132109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.796 [2024-11-27 10:03:12.132663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.796 [2024-11-27 10:03:12.132693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.796 [2024-11-27 10:03:12.132701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.796 [2024-11-27 10:03:12.132866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.796 [2024-11-27 10:03:12.133018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.796 [2024-11-27 10:03:12.133025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.796 [2024-11-27 10:03:12.133030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.796 [2024-11-27 10:03:12.133035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.796 [2024-11-27 10:03:12.144751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.796 [2024-11-27 10:03:12.145300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.796 [2024-11-27 10:03:12.145329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.796 [2024-11-27 10:03:12.145338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.796 [2024-11-27 10:03:12.145502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.796 [2024-11-27 10:03:12.145654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.796 [2024-11-27 10:03:12.145660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.796 [2024-11-27 10:03:12.145666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.796 [2024-11-27 10:03:12.145672] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.796 [2024-11-27 10:03:12.157379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.796 [2024-11-27 10:03:12.157938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.796 [2024-11-27 10:03:12.157969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.796 [2024-11-27 10:03:12.157978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.796 [2024-11-27 10:03:12.158144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.796 [2024-11-27 10:03:12.158303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.796 [2024-11-27 10:03:12.158311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.796 [2024-11-27 10:03:12.158320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.796 [2024-11-27 10:03:12.158326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.796 [2024-11-27 10:03:12.170033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.796 [2024-11-27 10:03:12.170592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.796 [2024-11-27 10:03:12.170623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.796 [2024-11-27 10:03:12.170631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.796 [2024-11-27 10:03:12.170796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.796 [2024-11-27 10:03:12.170948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.796 [2024-11-27 10:03:12.170955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.796 [2024-11-27 10:03:12.170960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.796 [2024-11-27 10:03:12.170966] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.796 [2024-11-27 10:03:12.182696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.796 [2024-11-27 10:03:12.183164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.796 [2024-11-27 10:03:12.183180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.796 [2024-11-27 10:03:12.183185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.796 [2024-11-27 10:03:12.183334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.796 [2024-11-27 10:03:12.183483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.796 [2024-11-27 10:03:12.183488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.796 [2024-11-27 10:03:12.183493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.796 [2024-11-27 10:03:12.183498] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.796 [2024-11-27 10:03:12.195359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.796 [2024-11-27 10:03:12.195827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.796 [2024-11-27 10:03:12.195840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.797 [2024-11-27 10:03:12.195846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.797 [2024-11-27 10:03:12.195994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.797 [2024-11-27 10:03:12.196143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.797 [2024-11-27 10:03:12.196149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.797 [2024-11-27 10:03:12.196154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.797 [2024-11-27 10:03:12.196164] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.797 [2024-11-27 10:03:12.208010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.797 [2024-11-27 10:03:12.208487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.797 [2024-11-27 10:03:12.208500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.797 [2024-11-27 10:03:12.208506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.797 [2024-11-27 10:03:12.208653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.797 [2024-11-27 10:03:12.208802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.797 [2024-11-27 10:03:12.208808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.797 [2024-11-27 10:03:12.208813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.797 [2024-11-27 10:03:12.208818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.797 [2024-11-27 10:03:12.220673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.797 [2024-11-27 10:03:12.221209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.797 [2024-11-27 10:03:12.221239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.797 [2024-11-27 10:03:12.221248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.797 [2024-11-27 10:03:12.221415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.797 [2024-11-27 10:03:12.221567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.797 [2024-11-27 10:03:12.221574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.797 [2024-11-27 10:03:12.221579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.797 [2024-11-27 10:03:12.221585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.797 [2024-11-27 10:03:12.233307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.797 [2024-11-27 10:03:12.233821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.797 [2024-11-27 10:03:12.233852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.797 [2024-11-27 10:03:12.233861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.797 [2024-11-27 10:03:12.234026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.797 [2024-11-27 10:03:12.234186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.797 [2024-11-27 10:03:12.234195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.797 [2024-11-27 10:03:12.234200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.797 [2024-11-27 10:03:12.234205] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.797 [2024-11-27 10:03:12.245929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.797 [2024-11-27 10:03:12.246539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.797 [2024-11-27 10:03:12.246574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.797 [2024-11-27 10:03:12.246582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.797 [2024-11-27 10:03:12.246747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.797 [2024-11-27 10:03:12.246898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.797 [2024-11-27 10:03:12.246906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.797 [2024-11-27 10:03:12.246911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.797 [2024-11-27 10:03:12.246917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:56.797 [2024-11-27 10:03:12.258617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:56.797 [2024-11-27 10:03:12.259075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.797 [2024-11-27 10:03:12.259090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:56.797 [2024-11-27 10:03:12.259095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:56.797 [2024-11-27 10:03:12.259248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:56.797 [2024-11-27 10:03:12.259398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:56.797 [2024-11-27 10:03:12.259404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:56.797 [2024-11-27 10:03:12.259409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:56.797 [2024-11-27 10:03:12.259413] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.059 [2024-11-27 10:03:12.271253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.059 [2024-11-27 10:03:12.271787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.059 [2024-11-27 10:03:12.271817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.059 [2024-11-27 10:03:12.271826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.059 [2024-11-27 10:03:12.271990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.059 [2024-11-27 10:03:12.272142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.059 [2024-11-27 10:03:12.272148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.059 [2024-11-27 10:03:12.272154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.059 [2024-11-27 10:03:12.272165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.059 [2024-11-27 10:03:12.283872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.059 [2024-11-27 10:03:12.284405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.059 [2024-11-27 10:03:12.284420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.059 [2024-11-27 10:03:12.284426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.059 [2024-11-27 10:03:12.284579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.059 [2024-11-27 10:03:12.284728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.059 [2024-11-27 10:03:12.284734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.059 [2024-11-27 10:03:12.284739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.059 [2024-11-27 10:03:12.284744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.059 [2024-11-27 10:03:12.296458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.059 [2024-11-27 10:03:12.296998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.059 [2024-11-27 10:03:12.297028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.059 [2024-11-27 10:03:12.297036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.059 [2024-11-27 10:03:12.297208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.059 [2024-11-27 10:03:12.297361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.059 [2024-11-27 10:03:12.297368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.059 [2024-11-27 10:03:12.297373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.059 [2024-11-27 10:03:12.297379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.059 [2024-11-27 10:03:12.309081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.059 [2024-11-27 10:03:12.309636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.059 [2024-11-27 10:03:12.309666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.059 [2024-11-27 10:03:12.309675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.059 [2024-11-27 10:03:12.309839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.059 [2024-11-27 10:03:12.309991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.059 [2024-11-27 10:03:12.309997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.059 [2024-11-27 10:03:12.310003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.059 [2024-11-27 10:03:12.310008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.059 [2024-11-27 10:03:12.321709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.059 [2024-11-27 10:03:12.322317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.059 [2024-11-27 10:03:12.322347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.060 [2024-11-27 10:03:12.322356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.060 [2024-11-27 10:03:12.322520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.060 [2024-11-27 10:03:12.322672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.060 [2024-11-27 10:03:12.322678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.060 [2024-11-27 10:03:12.322691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.060 [2024-11-27 10:03:12.322697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:57.060 [2024-11-27 10:03:12.334398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.060 [2024-11-27 10:03:12.334945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.060 [2024-11-27 10:03:12.334975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.060 [2024-11-27 10:03:12.334984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.060 [2024-11-27 10:03:12.335148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.060 [2024-11-27 10:03:12.335307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.060 [2024-11-27 10:03:12.335315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.060 [2024-11-27 10:03:12.335320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.060 [2024-11-27 10:03:12.335326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:57.060 [2024-11-27 10:03:12.347045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.060 [2024-11-27 10:03:12.347591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.060 [2024-11-27 10:03:12.347621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.060 [2024-11-27 10:03:12.347629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.060 [2024-11-27 10:03:12.347794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.060 [2024-11-27 10:03:12.347946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.060 [2024-11-27 10:03:12.347952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.060 [2024-11-27 10:03:12.347958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.060 [2024-11-27 10:03:12.347963] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:57.060 [2024-11-27 10:03:12.359682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.060 [2024-11-27 10:03:12.360207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.060 [2024-11-27 10:03:12.360237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.060 [2024-11-27 10:03:12.360246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.060 [2024-11-27 10:03:12.360410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.060 [2024-11-27 10:03:12.360562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.060 [2024-11-27 10:03:12.360569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.060 [2024-11-27 10:03:12.360574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.060 [2024-11-27 10:03:12.360579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:57.060 [2024-11-27 10:03:12.372286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.060 [2024-11-27 10:03:12.372831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.060 [2024-11-27 10:03:12.372860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.060 [2024-11-27 10:03:12.372869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.060 [2024-11-27 10:03:12.373033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.060 [2024-11-27 10:03:12.373192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.060 [2024-11-27 10:03:12.373200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.060 [2024-11-27 10:03:12.373206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.060 [2024-11-27 10:03:12.373211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:57.060 [2024-11-27 10:03:12.384915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.060 [2024-11-27 10:03:12.385325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.060 [2024-11-27 10:03:12.385340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.060 [2024-11-27 10:03:12.385346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.060 [2024-11-27 10:03:12.385495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.060 [2024-11-27 10:03:12.385644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.060 [2024-11-27 10:03:12.385650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.060 [2024-11-27 10:03:12.385655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.060 [2024-11-27 10:03:12.385660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:57.060 [2024-11-27 10:03:12.397505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.060 [2024-11-27 10:03:12.397850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.060 [2024-11-27 10:03:12.397862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.060 [2024-11-27 10:03:12.397868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.060 [2024-11-27 10:03:12.398016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.060 [2024-11-27 10:03:12.398169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.060 [2024-11-27 10:03:12.398175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.060 [2024-11-27 10:03:12.398180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.060 [2024-11-27 10:03:12.398185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:57.060 [2024-11-27 10:03:12.410151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.060 [2024-11-27 10:03:12.410577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.060 [2024-11-27 10:03:12.410594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.060 [2024-11-27 10:03:12.410599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.060 [2024-11-27 10:03:12.410748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.060 [2024-11-27 10:03:12.410897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.060 [2024-11-27 10:03:12.410902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.060 [2024-11-27 10:03:12.410908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.060 [2024-11-27 10:03:12.410913] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:57.060 [2024-11-27 10:03:12.422762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.060 [2024-11-27 10:03:12.423378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.060 [2024-11-27 10:03:12.423408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.060 [2024-11-27 10:03:12.423417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.061 [2024-11-27 10:03:12.423582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.061 [2024-11-27 10:03:12.423734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.061 [2024-11-27 10:03:12.423741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.061 [2024-11-27 10:03:12.423746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.061 [2024-11-27 10:03:12.423752] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:57.061 [2024-11-27 10:03:12.435457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.061 [2024-11-27 10:03:12.435985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.061 [2024-11-27 10:03:12.436015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.061 [2024-11-27 10:03:12.436024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.061 [2024-11-27 10:03:12.436196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.061 [2024-11-27 10:03:12.436348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.061 [2024-11-27 10:03:12.436355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.061 [2024-11-27 10:03:12.436361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.061 [2024-11-27 10:03:12.436367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:57.061 [2024-11-27 10:03:12.448062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.061 [2024-11-27 10:03:12.448624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.061 [2024-11-27 10:03:12.448654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.061 [2024-11-27 10:03:12.448662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.061 [2024-11-27 10:03:12.448832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.061 [2024-11-27 10:03:12.448984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.061 [2024-11-27 10:03:12.448990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.061 [2024-11-27 10:03:12.448995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.061 [2024-11-27 10:03:12.449001] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:57.061 [2024-11-27 10:03:12.460702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.061 [2024-11-27 10:03:12.461261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.061 [2024-11-27 10:03:12.461292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.061 [2024-11-27 10:03:12.461300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.061 [2024-11-27 10:03:12.461465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.061 [2024-11-27 10:03:12.461617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.061 [2024-11-27 10:03:12.461624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.061 [2024-11-27 10:03:12.461629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.061 [2024-11-27 10:03:12.461635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:57.061 [2024-11-27 10:03:12.473344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.061 [2024-11-27 10:03:12.473890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.061 [2024-11-27 10:03:12.473920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.061 [2024-11-27 10:03:12.473928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.061 [2024-11-27 10:03:12.474093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.061 [2024-11-27 10:03:12.474252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.061 [2024-11-27 10:03:12.474259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.061 [2024-11-27 10:03:12.474264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.061 [2024-11-27 10:03:12.474270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:57.061 [2024-11-27 10:03:12.485973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.061 [2024-11-27 10:03:12.486526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.061 [2024-11-27 10:03:12.486556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.061 [2024-11-27 10:03:12.486565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.061 [2024-11-27 10:03:12.486731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.061 [2024-11-27 10:03:12.486883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.061 [2024-11-27 10:03:12.486890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.061 [2024-11-27 10:03:12.486899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.061 [2024-11-27 10:03:12.486905] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:57.061 [2024-11-27 10:03:12.498623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.061 [2024-11-27 10:03:12.499192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.061 [2024-11-27 10:03:12.499222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.061 [2024-11-27 10:03:12.499230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.061 [2024-11-27 10:03:12.499395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.061 [2024-11-27 10:03:12.499547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.061 [2024-11-27 10:03:12.499553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.061 [2024-11-27 10:03:12.499559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.061 [2024-11-27 10:03:12.499565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:57.061 [2024-11-27 10:03:12.511263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.061 [2024-11-27 10:03:12.511806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.061 [2024-11-27 10:03:12.511837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.061 [2024-11-27 10:03:12.511845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.061 [2024-11-27 10:03:12.512010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.061 [2024-11-27 10:03:12.512170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.061 [2024-11-27 10:03:12.512177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.061 [2024-11-27 10:03:12.512182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.061 [2024-11-27 10:03:12.512188] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:57.323 [2024-11-27 10:03:12.523886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.323 [2024-11-27 10:03:12.524331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.323 [2024-11-27 10:03:12.524347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.323 [2024-11-27 10:03:12.524352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.323 [2024-11-27 10:03:12.524502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.323 [2024-11-27 10:03:12.524650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.323 [2024-11-27 10:03:12.524656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.323 [2024-11-27 10:03:12.524661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.323 [2024-11-27 10:03:12.524666] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:57.323 [2024-11-27 10:03:12.536512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.323 [2024-11-27 10:03:12.536959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.323 [2024-11-27 10:03:12.536971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.323 [2024-11-27 10:03:12.536976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.323 [2024-11-27 10:03:12.537124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.323 [2024-11-27 10:03:12.537278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.323 [2024-11-27 10:03:12.537284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.323 [2024-11-27 10:03:12.537289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.323 [2024-11-27 10:03:12.537293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:57.323 [2024-11-27 10:03:12.549136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.323 [2024-11-27 10:03:12.549583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.323 [2024-11-27 10:03:12.549596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.323 [2024-11-27 10:03:12.549601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.323 [2024-11-27 10:03:12.549749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.323 [2024-11-27 10:03:12.549897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.323 [2024-11-27 10:03:12.549903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.323 [2024-11-27 10:03:12.549908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.323 [2024-11-27 10:03:12.549912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:57.323 [2024-11-27 10:03:12.561752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.324 [2024-11-27 10:03:12.562153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.324 [2024-11-27 10:03:12.562168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.324 [2024-11-27 10:03:12.562174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.324 [2024-11-27 10:03:12.562322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.324 [2024-11-27 10:03:12.562470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.324 [2024-11-27 10:03:12.562476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.324 [2024-11-27 10:03:12.562481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.324 [2024-11-27 10:03:12.562486] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:57.324 [2024-11-27 10:03:12.574321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.324 [2024-11-27 10:03:12.574808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.324 [2024-11-27 10:03:12.574841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.324 [2024-11-27 10:03:12.574850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.324 [2024-11-27 10:03:12.575014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.324 [2024-11-27 10:03:12.575179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.324 [2024-11-27 10:03:12.575187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.324 [2024-11-27 10:03:12.575192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.324 [2024-11-27 10:03:12.575198] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:57.324 [2024-11-27 10:03:12.586895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.324 [2024-11-27 10:03:12.587309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.324 [2024-11-27 10:03:12.587339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.324 [2024-11-27 10:03:12.587348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.324 [2024-11-27 10:03:12.587515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.324 [2024-11-27 10:03:12.587667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.324 [2024-11-27 10:03:12.587673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.324 [2024-11-27 10:03:12.587679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.324 [2024-11-27 10:03:12.587684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:57.324 [2024-11-27 10:03:12.599533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.324 [2024-11-27 10:03:12.600055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.324 [2024-11-27 10:03:12.600085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.324 [2024-11-27 10:03:12.600094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.324 [2024-11-27 10:03:12.600266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.324 [2024-11-27 10:03:12.600419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.324 [2024-11-27 10:03:12.600426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.324 [2024-11-27 10:03:12.600431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.324 [2024-11-27 10:03:12.600436] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:57.324 [2024-11-27 10:03:12.612149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.324 [2024-11-27 10:03:12.612673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.324 [2024-11-27 10:03:12.612703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.324 [2024-11-27 10:03:12.612712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.324 [2024-11-27 10:03:12.612876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.324 [2024-11-27 10:03:12.613032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.324 [2024-11-27 10:03:12.613039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.324 [2024-11-27 10:03:12.613044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.324 [2024-11-27 10:03:12.613049] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:57.324 [2024-11-27 10:03:12.624765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.324 [2024-11-27 10:03:12.625372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.324 [2024-11-27 10:03:12.625402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.324 [2024-11-27 10:03:12.625411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.324 [2024-11-27 10:03:12.625575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.324 [2024-11-27 10:03:12.625727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.324 [2024-11-27 10:03:12.625733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.324 [2024-11-27 10:03:12.625739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.324 [2024-11-27 10:03:12.625745] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:57.324 [2024-11-27 10:03:12.637450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.324 [2024-11-27 10:03:12.637909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.324 [2024-11-27 10:03:12.637923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.324 [2024-11-27 10:03:12.637929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.324 [2024-11-27 10:03:12.638078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.324 [2024-11-27 10:03:12.638232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.324 [2024-11-27 10:03:12.638238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.324 [2024-11-27 10:03:12.638243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.324 [2024-11-27 10:03:12.638248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:57.324 [2024-11-27 10:03:12.650086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.324 [2024-11-27 10:03:12.650511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.324 [2024-11-27 10:03:12.650524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.324 [2024-11-27 10:03:12.650530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.324 [2024-11-27 10:03:12.650678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.324 [2024-11-27 10:03:12.650827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.324 [2024-11-27 10:03:12.650832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.324 [2024-11-27 10:03:12.650841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.324 [2024-11-27 10:03:12.650846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:57.324 [2024-11-27 10:03:12.662684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.324 [2024-11-27 10:03:12.663120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.324 [2024-11-27 10:03:12.663133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.324 [2024-11-27 10:03:12.663138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.324 [2024-11-27 10:03:12.663290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.324 [2024-11-27 10:03:12.663440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.324 [2024-11-27 10:03:12.663446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.324 [2024-11-27 10:03:12.663451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.324 [2024-11-27 10:03:12.663456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:57.324 [2024-11-27 10:03:12.675387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.324 [2024-11-27 10:03:12.675793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.324 [2024-11-27 10:03:12.675806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.324 [2024-11-27 10:03:12.675812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.324 [2024-11-27 10:03:12.675961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.324 [2024-11-27 10:03:12.676109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.324 [2024-11-27 10:03:12.676116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.324 [2024-11-27 10:03:12.676121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.324 [2024-11-27 10:03:12.676126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:57.325 [2024-11-27 10:03:12.688041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.325 [2024-11-27 10:03:12.688583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.325 [2024-11-27 10:03:12.688613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.325 [2024-11-27 10:03:12.688622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.325 [2024-11-27 10:03:12.688787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.325 [2024-11-27 10:03:12.688938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.325 [2024-11-27 10:03:12.688945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.325 [2024-11-27 10:03:12.688950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.325 [2024-11-27 10:03:12.688956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:57.325 [2024-11-27 10:03:12.700675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.325 [2024-11-27 10:03:12.701226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.325 [2024-11-27 10:03:12.701256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.325 [2024-11-27 10:03:12.701265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.325 [2024-11-27 10:03:12.701432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.325 [2024-11-27 10:03:12.701584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.325 [2024-11-27 10:03:12.701591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.325 [2024-11-27 10:03:12.701596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.325 [2024-11-27 10:03:12.701602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:57.325 [2024-11-27 10:03:12.713306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.325 [2024-11-27 10:03:12.713846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.325 [2024-11-27 10:03:12.713876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.325 [2024-11-27 10:03:12.713884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.325 [2024-11-27 10:03:12.714049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.325 [2024-11-27 10:03:12.714209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.325 [2024-11-27 10:03:12.714217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.325 [2024-11-27 10:03:12.714222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.325 [2024-11-27 10:03:12.714228] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:57.325 [2024-11-27 10:03:12.725918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.325 [2024-11-27 10:03:12.726488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.325 [2024-11-27 10:03:12.726518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.325 [2024-11-27 10:03:12.726527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.325 [2024-11-27 10:03:12.726691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.325 [2024-11-27 10:03:12.726843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.325 [2024-11-27 10:03:12.726850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.325 [2024-11-27 10:03:12.726855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.325 [2024-11-27 10:03:12.726861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:57.325 [2024-11-27 10:03:12.738561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.325 [2024-11-27 10:03:12.739033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.325 [2024-11-27 10:03:12.739063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.325 [2024-11-27 10:03:12.739075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.325 [2024-11-27 10:03:12.739247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.325 [2024-11-27 10:03:12.739400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.325 [2024-11-27 10:03:12.739407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.325 [2024-11-27 10:03:12.739412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.325 [2024-11-27 10:03:12.739418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:57.325 [2024-11-27 10:03:12.751253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.325 [2024-11-27 10:03:12.751821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.325 [2024-11-27 10:03:12.751851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.325 [2024-11-27 10:03:12.751860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.325 [2024-11-27 10:03:12.752024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.325 [2024-11-27 10:03:12.752184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.325 [2024-11-27 10:03:12.752191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.325 [2024-11-27 10:03:12.752196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.325 [2024-11-27 10:03:12.752202] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:57.325 [2024-11-27 10:03:12.763898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.325 [2024-11-27 10:03:12.764362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.325 [2024-11-27 10:03:12.764392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.325 [2024-11-27 10:03:12.764401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.325 [2024-11-27 10:03:12.764565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.325 [2024-11-27 10:03:12.764717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.325 [2024-11-27 10:03:12.764723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.325 [2024-11-27 10:03:12.764729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.325 [2024-11-27 10:03:12.764734] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:57.325 [2024-11-27 10:03:12.776579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.325 [2024-11-27 10:03:12.777132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.325 [2024-11-27 10:03:12.777169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.325 [2024-11-27 10:03:12.777177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.325 [2024-11-27 10:03:12.777342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.325 [2024-11-27 10:03:12.777498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.325 [2024-11-27 10:03:12.777504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.325 [2024-11-27 10:03:12.777510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.325 [2024-11-27 10:03:12.777515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:57.588 [2024-11-27 10:03:12.789227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.588 [2024-11-27 10:03:12.789753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.588 [2024-11-27 10:03:12.789783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.588 [2024-11-27 10:03:12.789792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.588 [2024-11-27 10:03:12.789957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.588 [2024-11-27 10:03:12.790108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.588 [2024-11-27 10:03:12.790115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.588 [2024-11-27 10:03:12.790120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.588 [2024-11-27 10:03:12.790126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:57.588 [2024-11-27 10:03:12.801838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.588 [2024-11-27 10:03:12.802374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.588 [2024-11-27 10:03:12.802404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.588 [2024-11-27 10:03:12.802412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.588 [2024-11-27 10:03:12.802577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.588 [2024-11-27 10:03:12.802728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.588 [2024-11-27 10:03:12.802735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.588 [2024-11-27 10:03:12.802741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.588 [2024-11-27 10:03:12.802747] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:57.588 [2024-11-27 10:03:12.814444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.588 [2024-11-27 10:03:12.814924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.588 [2024-11-27 10:03:12.814954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.588 [2024-11-27 10:03:12.814963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.588 [2024-11-27 10:03:12.815127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.588 [2024-11-27 10:03:12.815287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.589 [2024-11-27 10:03:12.815295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.589 [2024-11-27 10:03:12.815304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.589 [2024-11-27 10:03:12.815310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:57.589 [2024-11-27 10:03:12.827145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.589 [2024-11-27 10:03:12.827699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.589 [2024-11-27 10:03:12.827729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.589 [2024-11-27 10:03:12.827738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.589 [2024-11-27 10:03:12.827902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.589 [2024-11-27 10:03:12.828054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.589 [2024-11-27 10:03:12.828061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.589 [2024-11-27 10:03:12.828066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.589 [2024-11-27 10:03:12.828072] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:57.589 [2024-11-27 10:03:12.839774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:57.589 [2024-11-27 10:03:12.840216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.589 [2024-11-27 10:03:12.840232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:57.589 [2024-11-27 10:03:12.840237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:57.589 [2024-11-27 10:03:12.840387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:57.589 [2024-11-27 10:03:12.840535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:57.589 [2024-11-27 10:03:12.840541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:57.589 [2024-11-27 10:03:12.840546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:57.589 [2024-11-27 10:03:12.840551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:57.589 [2024-11-27 10:03:12.852393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.589 [2024-11-27 10:03:12.852960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.589 [2024-11-27 10:03:12.852990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.589 [2024-11-27 10:03:12.852999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.589 [2024-11-27 10:03:12.853169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.589 [2024-11-27 10:03:12.853322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.589 [2024-11-27 10:03:12.853329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.589 [2024-11-27 10:03:12.853334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.589 [2024-11-27 10:03:12.853340] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.589 [2024-11-27 10:03:12.865060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.589 [2024-11-27 10:03:12.865539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.589 [2024-11-27 10:03:12.865554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.589 [2024-11-27 10:03:12.865560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.589 [2024-11-27 10:03:12.865708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.589 [2024-11-27 10:03:12.865857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.589 [2024-11-27 10:03:12.865863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.589 [2024-11-27 10:03:12.865868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.589 [2024-11-27 10:03:12.865873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.589 [2024-11-27 10:03:12.877743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.589 [2024-11-27 10:03:12.878144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.589 [2024-11-27 10:03:12.878157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.589 [2024-11-27 10:03:12.878167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.589 [2024-11-27 10:03:12.878315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.589 [2024-11-27 10:03:12.878464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.589 [2024-11-27 10:03:12.878469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.589 [2024-11-27 10:03:12.878474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.589 [2024-11-27 10:03:12.878479] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.589 [2024-11-27 10:03:12.890333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.589 [2024-11-27 10:03:12.890899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.589 [2024-11-27 10:03:12.890930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.589 [2024-11-27 10:03:12.890939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.589 [2024-11-27 10:03:12.891104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.589 [2024-11-27 10:03:12.891263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.589 [2024-11-27 10:03:12.891270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.589 [2024-11-27 10:03:12.891276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.589 [2024-11-27 10:03:12.891282] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.589 [2024-11-27 10:03:12.903008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.589 [2024-11-27 10:03:12.903516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.589 [2024-11-27 10:03:12.903531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.589 [2024-11-27 10:03:12.903541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.589 [2024-11-27 10:03:12.903690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.589 [2024-11-27 10:03:12.903838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.589 [2024-11-27 10:03:12.903844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.589 [2024-11-27 10:03:12.903849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.589 [2024-11-27 10:03:12.903854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.589 [2024-11-27 10:03:12.915704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.589 [2024-11-27 10:03:12.916243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.589 [2024-11-27 10:03:12.916274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.589 [2024-11-27 10:03:12.916282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.589 [2024-11-27 10:03:12.916449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.589 [2024-11-27 10:03:12.916601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.589 [2024-11-27 10:03:12.916608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.589 [2024-11-27 10:03:12.916613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.589 [2024-11-27 10:03:12.916619] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.589 [2024-11-27 10:03:12.928329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.589 [2024-11-27 10:03:12.928870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.589 [2024-11-27 10:03:12.928901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.589 [2024-11-27 10:03:12.928910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.589 [2024-11-27 10:03:12.929076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.589 [2024-11-27 10:03:12.929234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.589 [2024-11-27 10:03:12.929247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.589 [2024-11-27 10:03:12.929253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.589 [2024-11-27 10:03:12.929259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.589 [2024-11-27 10:03:12.940958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.589 [2024-11-27 10:03:12.941392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.589 [2024-11-27 10:03:12.941408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.590 [2024-11-27 10:03:12.941413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.590 [2024-11-27 10:03:12.941563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.590 [2024-11-27 10:03:12.941722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.590 [2024-11-27 10:03:12.941727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.590 [2024-11-27 10:03:12.941733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.590 [2024-11-27 10:03:12.941739] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.590 [2024-11-27 10:03:12.953588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.590 [2024-11-27 10:03:12.953981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.590 [2024-11-27 10:03:12.953993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.590 [2024-11-27 10:03:12.953999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.590 [2024-11-27 10:03:12.954147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.590 [2024-11-27 10:03:12.954300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.590 [2024-11-27 10:03:12.954306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.590 [2024-11-27 10:03:12.954311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.590 [2024-11-27 10:03:12.954315] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.590 [2024-11-27 10:03:12.966167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.590 [2024-11-27 10:03:12.966602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.590 [2024-11-27 10:03:12.966614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.590 [2024-11-27 10:03:12.966619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.590 [2024-11-27 10:03:12.966767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.590 [2024-11-27 10:03:12.966916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.590 [2024-11-27 10:03:12.966922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.590 [2024-11-27 10:03:12.966927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.590 [2024-11-27 10:03:12.966931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.590 [2024-11-27 10:03:12.978772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.590 [2024-11-27 10:03:12.979176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.590 [2024-11-27 10:03:12.979189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.590 [2024-11-27 10:03:12.979195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.590 [2024-11-27 10:03:12.979343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.590 [2024-11-27 10:03:12.979491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.590 [2024-11-27 10:03:12.979497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.590 [2024-11-27 10:03:12.979506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.590 [2024-11-27 10:03:12.979511] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.590 [2024-11-27 10:03:12.991356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.590 [2024-11-27 10:03:12.991806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.590 [2024-11-27 10:03:12.991819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.590 [2024-11-27 10:03:12.991824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.590 [2024-11-27 10:03:12.991972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.590 [2024-11-27 10:03:12.992120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.590 [2024-11-27 10:03:12.992126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.590 [2024-11-27 10:03:12.992131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.590 [2024-11-27 10:03:12.992136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.590 5649.20 IOPS, 22.07 MiB/s [2024-11-27T09:03:13.056Z] [2024-11-27 10:03:13.003995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.590 [2024-11-27 10:03:13.004534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.590 [2024-11-27 10:03:13.004565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.590 [2024-11-27 10:03:13.004573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.590 [2024-11-27 10:03:13.004738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.590 [2024-11-27 10:03:13.004889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.590 [2024-11-27 10:03:13.004896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.590 [2024-11-27 10:03:13.004901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.590 [2024-11-27 10:03:13.004906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.590 [2024-11-27 10:03:13.016620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.590 [2024-11-27 10:03:13.017075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.590 [2024-11-27 10:03:13.017090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.590 [2024-11-27 10:03:13.017096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.590 [2024-11-27 10:03:13.017248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.590 [2024-11-27 10:03:13.017398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.590 [2024-11-27 10:03:13.017403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.590 [2024-11-27 10:03:13.017409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.590 [2024-11-27 10:03:13.017414] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.590 [2024-11-27 10:03:13.029255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.590 [2024-11-27 10:03:13.029764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.590 [2024-11-27 10:03:13.029794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.590 [2024-11-27 10:03:13.029803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.590 [2024-11-27 10:03:13.029967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.590 [2024-11-27 10:03:13.030118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.590 [2024-11-27 10:03:13.030125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.590 [2024-11-27 10:03:13.030130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.590 [2024-11-27 10:03:13.030136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.590 [2024-11-27 10:03:13.041854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.590 [2024-11-27 10:03:13.042416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.590 [2024-11-27 10:03:13.042446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.590 [2024-11-27 10:03:13.042455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.590 [2024-11-27 10:03:13.042619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.590 [2024-11-27 10:03:13.042771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.590 [2024-11-27 10:03:13.042777] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.590 [2024-11-27 10:03:13.042783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.590 [2024-11-27 10:03:13.042788] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.853 [2024-11-27 10:03:13.054502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.853 [2024-11-27 10:03:13.054924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.853 [2024-11-27 10:03:13.054938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.853 [2024-11-27 10:03:13.054944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.853 [2024-11-27 10:03:13.055092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.853 [2024-11-27 10:03:13.055246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.853 [2024-11-27 10:03:13.055252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.853 [2024-11-27 10:03:13.055257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.853 [2024-11-27 10:03:13.055263] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.853 [2024-11-27 10:03:13.067111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.853 [2024-11-27 10:03:13.067644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.853 [2024-11-27 10:03:13.067679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.853 [2024-11-27 10:03:13.067687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.853 [2024-11-27 10:03:13.067851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.853 [2024-11-27 10:03:13.068003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.853 [2024-11-27 10:03:13.068010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.853 [2024-11-27 10:03:13.068015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.853 [2024-11-27 10:03:13.068020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.853 [2024-11-27 10:03:13.079752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.853 [2024-11-27 10:03:13.080208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.853 [2024-11-27 10:03:13.080238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.853 [2024-11-27 10:03:13.080247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.853 [2024-11-27 10:03:13.080414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.853 [2024-11-27 10:03:13.080566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.853 [2024-11-27 10:03:13.080572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.853 [2024-11-27 10:03:13.080577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.853 [2024-11-27 10:03:13.080583] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.853 [2024-11-27 10:03:13.092435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.853 [2024-11-27 10:03:13.092891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.853 [2024-11-27 10:03:13.092906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.853 [2024-11-27 10:03:13.092912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.853 [2024-11-27 10:03:13.093060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.853 [2024-11-27 10:03:13.093220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.853 [2024-11-27 10:03:13.093227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.853 [2024-11-27 10:03:13.093232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.853 [2024-11-27 10:03:13.093237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.853 [2024-11-27 10:03:13.105081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.853 [2024-11-27 10:03:13.105461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.853 [2024-11-27 10:03:13.105474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.853 [2024-11-27 10:03:13.105480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.853 [2024-11-27 10:03:13.105632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.853 [2024-11-27 10:03:13.105780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.853 [2024-11-27 10:03:13.105786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.853 [2024-11-27 10:03:13.105791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.853 [2024-11-27 10:03:13.105796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.853 [2024-11-27 10:03:13.117652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.853 [2024-11-27 10:03:13.118051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.853 [2024-11-27 10:03:13.118063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.853 [2024-11-27 10:03:13.118068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.853 [2024-11-27 10:03:13.118221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.853 [2024-11-27 10:03:13.118369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.853 [2024-11-27 10:03:13.118375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.853 [2024-11-27 10:03:13.118380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.853 [2024-11-27 10:03:13.118385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.853 [2024-11-27 10:03:13.130233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.853 [2024-11-27 10:03:13.130755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.853 [2024-11-27 10:03:13.130785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.853 [2024-11-27 10:03:13.130794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.853 [2024-11-27 10:03:13.130958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.853 [2024-11-27 10:03:13.131109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.853 [2024-11-27 10:03:13.131116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.853 [2024-11-27 10:03:13.131121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.853 [2024-11-27 10:03:13.131127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.853 [2024-11-27 10:03:13.142832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.853 [2024-11-27 10:03:13.143285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.854 [2024-11-27 10:03:13.143316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.854 [2024-11-27 10:03:13.143324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.854 [2024-11-27 10:03:13.143489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.854 [2024-11-27 10:03:13.143641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.854 [2024-11-27 10:03:13.143647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.854 [2024-11-27 10:03:13.143656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.854 [2024-11-27 10:03:13.143661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.854 [2024-11-27 10:03:13.155513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.854 [2024-11-27 10:03:13.155933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.854 [2024-11-27 10:03:13.155948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.854 [2024-11-27 10:03:13.155953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.854 [2024-11-27 10:03:13.156102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.854 [2024-11-27 10:03:13.156254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.854 [2024-11-27 10:03:13.156261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.854 [2024-11-27 10:03:13.156266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.854 [2024-11-27 10:03:13.156271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.854 [2024-11-27 10:03:13.168114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.854 [2024-11-27 10:03:13.168515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.854 [2024-11-27 10:03:13.168528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.854 [2024-11-27 10:03:13.168533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.854 [2024-11-27 10:03:13.168681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.854 [2024-11-27 10:03:13.168828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.854 [2024-11-27 10:03:13.168834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.854 [2024-11-27 10:03:13.168839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.854 [2024-11-27 10:03:13.168844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.854 [2024-11-27 10:03:13.180703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.854 [2024-11-27 10:03:13.181149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.854 [2024-11-27 10:03:13.181164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.854 [2024-11-27 10:03:13.181170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.854 [2024-11-27 10:03:13.181318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.854 [2024-11-27 10:03:13.181466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.854 [2024-11-27 10:03:13.181473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.854 [2024-11-27 10:03:13.181478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.854 [2024-11-27 10:03:13.181483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.854 [2024-11-27 10:03:13.193327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.854 [2024-11-27 10:03:13.193872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.854 [2024-11-27 10:03:13.193902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.854 [2024-11-27 10:03:13.193911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.854 [2024-11-27 10:03:13.194076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.854 [2024-11-27 10:03:13.194234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.854 [2024-11-27 10:03:13.194241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.854 [2024-11-27 10:03:13.194247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.854 [2024-11-27 10:03:13.194252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.854 [2024-11-27 10:03:13.205954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.854 [2024-11-27 10:03:13.206412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.854 [2024-11-27 10:03:13.206427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.854 [2024-11-27 10:03:13.206433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.854 [2024-11-27 10:03:13.206581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.854 [2024-11-27 10:03:13.206730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.854 [2024-11-27 10:03:13.206736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.854 [2024-11-27 10:03:13.206741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.854 [2024-11-27 10:03:13.206745] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.854 [2024-11-27 10:03:13.218591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.854 [2024-11-27 10:03:13.218989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.854 [2024-11-27 10:03:13.219002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.854 [2024-11-27 10:03:13.219007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.854 [2024-11-27 10:03:13.219155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.854 [2024-11-27 10:03:13.219309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.854 [2024-11-27 10:03:13.219315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.854 [2024-11-27 10:03:13.219320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.854 [2024-11-27 10:03:13.219325] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.854 [2024-11-27 10:03:13.231169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.854 [2024-11-27 10:03:13.231611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.854 [2024-11-27 10:03:13.231627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.854 [2024-11-27 10:03:13.231632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.854 [2024-11-27 10:03:13.231780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.854 [2024-11-27 10:03:13.231928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.854 [2024-11-27 10:03:13.231934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.854 [2024-11-27 10:03:13.231939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.854 [2024-11-27 10:03:13.231944] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.854 [2024-11-27 10:03:13.243787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.854 [2024-11-27 10:03:13.244299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.854 [2024-11-27 10:03:13.244311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.854 [2024-11-27 10:03:13.244316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.854 [2024-11-27 10:03:13.244464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.854 [2024-11-27 10:03:13.244613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.854 [2024-11-27 10:03:13.244619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.854 [2024-11-27 10:03:13.244624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.854 [2024-11-27 10:03:13.244629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.854 [2024-11-27 10:03:13.256477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.854 [2024-11-27 10:03:13.256965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.854 [2024-11-27 10:03:13.256995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.854 [2024-11-27 10:03:13.257004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.854 [2024-11-27 10:03:13.257175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.854 [2024-11-27 10:03:13.257328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.854 [2024-11-27 10:03:13.257335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.854 [2024-11-27 10:03:13.257341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.854 [2024-11-27 10:03:13.257346] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.854 [2024-11-27 10:03:13.269051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.855 [2024-11-27 10:03:13.269541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.855 [2024-11-27 10:03:13.269558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.855 [2024-11-27 10:03:13.269568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.855 [2024-11-27 10:03:13.269722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.855 [2024-11-27 10:03:13.269873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.855 [2024-11-27 10:03:13.269879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.855 [2024-11-27 10:03:13.269884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.855 [2024-11-27 10:03:13.269889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.855 [2024-11-27 10:03:13.281751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.855 [2024-11-27 10:03:13.282095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.855 [2024-11-27 10:03:13.282107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.855 [2024-11-27 10:03:13.282113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.855 [2024-11-27 10:03:13.282265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.855 [2024-11-27 10:03:13.282414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.855 [2024-11-27 10:03:13.282420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.855 [2024-11-27 10:03:13.282425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.855 [2024-11-27 10:03:13.282430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.855 [2024-11-27 10:03:13.294418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.855 [2024-11-27 10:03:13.294864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.855 [2024-11-27 10:03:13.294876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.855 [2024-11-27 10:03:13.294881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.855 [2024-11-27 10:03:13.295029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.855 [2024-11-27 10:03:13.295184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.855 [2024-11-27 10:03:13.295190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.855 [2024-11-27 10:03:13.295195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.855 [2024-11-27 10:03:13.295200] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:57.855 [2024-11-27 10:03:13.307034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:57.855 [2024-11-27 10:03:13.307602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.855 [2024-11-27 10:03:13.307632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:57.855 [2024-11-27 10:03:13.307641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:57.855 [2024-11-27 10:03:13.307805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:57.855 [2024-11-27 10:03:13.307958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:57.855 [2024-11-27 10:03:13.307964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:57.855 [2024-11-27 10:03:13.307974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:57.855 [2024-11-27 10:03:13.307980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:58.118 [2024-11-27 10:03:13.319686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:58.118 [2024-11-27 10:03:13.320008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:58.118 [2024-11-27 10:03:13.320024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:58.118 [2024-11-27 10:03:13.320030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:58.118 [2024-11-27 10:03:13.320184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:58.118 [2024-11-27 10:03:13.320333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:58.118 [2024-11-27 10:03:13.320339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:58.118 [2024-11-27 10:03:13.320344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:58.119 [2024-11-27 10:03:13.320349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:58.119 [2024-11-27 10:03:13.332339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:58.119 [2024-11-27 10:03:13.332877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:58.119 [2024-11-27 10:03:13.332908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:58.119 [2024-11-27 10:03:13.332918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:58.119 [2024-11-27 10:03:13.333084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:58.119 [2024-11-27 10:03:13.333243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:58.119 [2024-11-27 10:03:13.333251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:58.119 [2024-11-27 10:03:13.333257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:58.119 [2024-11-27 10:03:13.333262] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:58.119 [2024-11-27 10:03:13.344968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:58.119 [2024-11-27 10:03:13.345226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:58.119 [2024-11-27 10:03:13.345248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:58.119 [2024-11-27 10:03:13.345254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:58.119 [2024-11-27 10:03:13.345403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:58.119 [2024-11-27 10:03:13.345552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:58.119 [2024-11-27 10:03:13.345558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:58.119 [2024-11-27 10:03:13.345563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:58.119 [2024-11-27 10:03:13.345568] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:58.119 [2024-11-27 10:03:13.357564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:58.119 [2024-11-27 10:03:13.357966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:58.119 [2024-11-27 10:03:13.357979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:58.119 [2024-11-27 10:03:13.357984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:58.119 [2024-11-27 10:03:13.358133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:58.119 [2024-11-27 10:03:13.358284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:58.119 [2024-11-27 10:03:13.358291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:58.119 [2024-11-27 10:03:13.358296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:58.119 [2024-11-27 10:03:13.358300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:58.119 [2024-11-27 10:03:13.370142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:58.119 [2024-11-27 10:03:13.370591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:58.119 [2024-11-27 10:03:13.370604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420
00:30:58.119 [2024-11-27 10:03:13.370609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set
00:30:58.119 [2024-11-27 10:03:13.370757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor
00:30:58.119 [2024-11-27 10:03:13.370905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:58.119 [2024-11-27 10:03:13.370911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:58.119 [2024-11-27 10:03:13.370915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:58.119 [2024-11-27 10:03:13.370920] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:58.119 [2024-11-27 10:03:13.382776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.119 [2024-11-27 10:03:13.383358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.119 [2024-11-27 10:03:13.383388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.119 [2024-11-27 10:03:13.383396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.119 [2024-11-27 10:03:13.383561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.119 [2024-11-27 10:03:13.383713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.119 [2024-11-27 10:03:13.383719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.119 [2024-11-27 10:03:13.383725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.119 [2024-11-27 10:03:13.383730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.119 [2024-11-27 10:03:13.395458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.119 [2024-11-27 10:03:13.395981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.119 [2024-11-27 10:03:13.396015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.119 [2024-11-27 10:03:13.396023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.119 [2024-11-27 10:03:13.396194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.119 [2024-11-27 10:03:13.396346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.119 [2024-11-27 10:03:13.396353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.119 [2024-11-27 10:03:13.396358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.119 [2024-11-27 10:03:13.396364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:58.119 [2024-11-27 10:03:13.408075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.119 [2024-11-27 10:03:13.408669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.119 [2024-11-27 10:03:13.408699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.119 [2024-11-27 10:03:13.408708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.119 [2024-11-27 10:03:13.408872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.119 [2024-11-27 10:03:13.409024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.119 [2024-11-27 10:03:13.409030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.119 [2024-11-27 10:03:13.409036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.119 [2024-11-27 10:03:13.409041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.119 [2024-11-27 10:03:13.420749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.119 [2024-11-27 10:03:13.421171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.119 [2024-11-27 10:03:13.421187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.119 [2024-11-27 10:03:13.421192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.119 [2024-11-27 10:03:13.421341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.119 [2024-11-27 10:03:13.421490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.119 [2024-11-27 10:03:13.421496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.120 [2024-11-27 10:03:13.421501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.120 [2024-11-27 10:03:13.421507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:58.120 [2024-11-27 10:03:13.433355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.120 [2024-11-27 10:03:13.433910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.120 [2024-11-27 10:03:13.433941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.120 [2024-11-27 10:03:13.433950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.120 [2024-11-27 10:03:13.434117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.120 [2024-11-27 10:03:13.434277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.120 [2024-11-27 10:03:13.434285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.120 [2024-11-27 10:03:13.434291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.120 [2024-11-27 10:03:13.434297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.120 [2024-11-27 10:03:13.445998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.120 [2024-11-27 10:03:13.446569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.120 [2024-11-27 10:03:13.446600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.120 [2024-11-27 10:03:13.446609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.120 [2024-11-27 10:03:13.446773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.120 [2024-11-27 10:03:13.446925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.120 [2024-11-27 10:03:13.446932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.120 [2024-11-27 10:03:13.446937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.120 [2024-11-27 10:03:13.446943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:58.120 [2024-11-27 10:03:13.458659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.120 [2024-11-27 10:03:13.459116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.120 [2024-11-27 10:03:13.459131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.120 [2024-11-27 10:03:13.459136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.120 [2024-11-27 10:03:13.459290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.120 [2024-11-27 10:03:13.459439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.120 [2024-11-27 10:03:13.459446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.120 [2024-11-27 10:03:13.459451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.120 [2024-11-27 10:03:13.459456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.120 [2024-11-27 10:03:13.471301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.120 [2024-11-27 10:03:13.471839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.120 [2024-11-27 10:03:13.471869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.120 [2024-11-27 10:03:13.471878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.120 [2024-11-27 10:03:13.472042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.120 [2024-11-27 10:03:13.472202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.120 [2024-11-27 10:03:13.472209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.120 [2024-11-27 10:03:13.472219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.120 [2024-11-27 10:03:13.472225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:58.120 [2024-11-27 10:03:13.483944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.120 [2024-11-27 10:03:13.484540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.120 [2024-11-27 10:03:13.484570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.120 [2024-11-27 10:03:13.484579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.120 [2024-11-27 10:03:13.484743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.120 [2024-11-27 10:03:13.484895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.120 [2024-11-27 10:03:13.484902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.120 [2024-11-27 10:03:13.484907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.120 [2024-11-27 10:03:13.484913] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.120 [2024-11-27 10:03:13.496643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.120 [2024-11-27 10:03:13.497100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.120 [2024-11-27 10:03:13.497131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.120 [2024-11-27 10:03:13.497140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.120 [2024-11-27 10:03:13.497312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.120 [2024-11-27 10:03:13.497465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.120 [2024-11-27 10:03:13.497472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.120 [2024-11-27 10:03:13.497477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.120 [2024-11-27 10:03:13.497483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:58.120 [2024-11-27 10:03:13.509326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.120 [2024-11-27 10:03:13.509872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.120 [2024-11-27 10:03:13.509903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.120 [2024-11-27 10:03:13.509911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.120 [2024-11-27 10:03:13.510076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.120 [2024-11-27 10:03:13.510234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.120 [2024-11-27 10:03:13.510241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.120 [2024-11-27 10:03:13.510246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.120 [2024-11-27 10:03:13.510252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.120 [2024-11-27 10:03:13.521956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.120 [2024-11-27 10:03:13.522369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.120 [2024-11-27 10:03:13.522384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.120 [2024-11-27 10:03:13.522390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.121 [2024-11-27 10:03:13.522538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.121 [2024-11-27 10:03:13.522687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.121 [2024-11-27 10:03:13.522693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.121 [2024-11-27 10:03:13.522698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.121 [2024-11-27 10:03:13.522703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:58.121 [2024-11-27 10:03:13.534548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.121 [2024-11-27 10:03:13.534858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.121 [2024-11-27 10:03:13.534872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.121 [2024-11-27 10:03:13.534877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.121 [2024-11-27 10:03:13.535026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.121 [2024-11-27 10:03:13.535179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.121 [2024-11-27 10:03:13.535185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.121 [2024-11-27 10:03:13.535190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.121 [2024-11-27 10:03:13.535195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.121 [2024-11-27 10:03:13.547183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.121 [2024-11-27 10:03:13.547681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.121 [2024-11-27 10:03:13.547712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.121 [2024-11-27 10:03:13.547721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.121 [2024-11-27 10:03:13.547885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.121 [2024-11-27 10:03:13.548037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.121 [2024-11-27 10:03:13.548044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.121 [2024-11-27 10:03:13.548049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.121 [2024-11-27 10:03:13.548054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:58.121 [2024-11-27 10:03:13.559765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.121 [2024-11-27 10:03:13.560221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.121 [2024-11-27 10:03:13.560243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.121 [2024-11-27 10:03:13.560249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.121 [2024-11-27 10:03:13.560398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.121 [2024-11-27 10:03:13.560546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.121 [2024-11-27 10:03:13.560552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.121 [2024-11-27 10:03:13.560557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.121 [2024-11-27 10:03:13.560562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.121 [2024-11-27 10:03:13.572412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.121 [2024-11-27 10:03:13.572860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.121 [2024-11-27 10:03:13.572872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.121 [2024-11-27 10:03:13.572878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.121 [2024-11-27 10:03:13.573026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.121 [2024-11-27 10:03:13.573179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.121 [2024-11-27 10:03:13.573185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.121 [2024-11-27 10:03:13.573190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.121 [2024-11-27 10:03:13.573195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:58.385 [2024-11-27 10:03:13.585048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.385 [2024-11-27 10:03:13.585509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.385 [2024-11-27 10:03:13.585522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.385 [2024-11-27 10:03:13.585528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.385 [2024-11-27 10:03:13.585676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.385 [2024-11-27 10:03:13.585824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.385 [2024-11-27 10:03:13.585830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.385 [2024-11-27 10:03:13.585835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.385 [2024-11-27 10:03:13.585840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.385 [2024-11-27 10:03:13.597691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.385 [2024-11-27 10:03:13.598152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.385 [2024-11-27 10:03:13.598168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.385 [2024-11-27 10:03:13.598174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.385 [2024-11-27 10:03:13.598326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.385 [2024-11-27 10:03:13.598474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.385 [2024-11-27 10:03:13.598480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.385 [2024-11-27 10:03:13.598485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.385 [2024-11-27 10:03:13.598490] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:58.385 [2024-11-27 10:03:13.610330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.385 [2024-11-27 10:03:13.610778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.385 [2024-11-27 10:03:13.610790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.385 [2024-11-27 10:03:13.610796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.385 [2024-11-27 10:03:13.610944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.385 [2024-11-27 10:03:13.611093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.385 [2024-11-27 10:03:13.611099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.385 [2024-11-27 10:03:13.611103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.385 [2024-11-27 10:03:13.611108] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.385 [2024-11-27 10:03:13.622942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.385 [2024-11-27 10:03:13.623413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.385 [2024-11-27 10:03:13.623426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.385 [2024-11-27 10:03:13.623432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.385 [2024-11-27 10:03:13.623580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.385 [2024-11-27 10:03:13.623728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.385 [2024-11-27 10:03:13.623734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.385 [2024-11-27 10:03:13.623739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.385 [2024-11-27 10:03:13.623744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:58.385 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 4063950 Killed "${NVMF_APP[@]}" "$@" 00:30:58.385 10:03:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:30:58.385 10:03:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:58.385 10:03:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:58.385 10:03:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:58.385 10:03:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:58.385 10:03:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=4066112 00:30:58.385 10:03:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 4066112 00:30:58.385 10:03:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:58.385 10:03:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 4066112 ']' 00:30:58.385 10:03:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:58.385 10:03:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:58.385 10:03:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:58.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:58.385 10:03:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:58.385 10:03:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:58.385 [2024-11-27 10:03:13.635621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.385 [2024-11-27 10:03:13.636027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.385 [2024-11-27 10:03:13.636039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.385 [2024-11-27 10:03:13.636045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.386 [2024-11-27 10:03:13.636197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.386 [2024-11-27 10:03:13.636347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.386 [2024-11-27 10:03:13.636353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.386 [2024-11-27 10:03:13.636359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.386 [2024-11-27 10:03:13.636364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
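Interleaved with the reconnect noise, bdevperf.sh has now killed the previous target ("${NVMF_APP[@]}") and tgt_init/nvmfappstart relaunch nvmf_tgt inside the cvl_0_0_ns_spdk namespace, after which waitforlisten polls the RPC socket (rpc_addr=/var/tmp/spdk.sock, max_retries=100 in the xtrace above, with waitforlisten itself coming from common/autotest_common.sh). A rough stand-in for that wait loop, assuming only an SPDK checkout and its scripts/rpc.py; this is a sketch, not the harness's actual helper:

    #!/usr/bin/env bash
    # waitforlisten-style sketch: poll the RPC socket until the freshly
    # started nvmf_tgt answers, or give up after max_retries attempts.
    # (rpc.py path and rpc_get_methods assumed from a stock SPDK checkout.)
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # tree path from the log
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        if "$spdk/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            echo "target is up and listening on $rpc_addr"
            exit 0
        fi
        sleep 0.1
    done
    echo "timed out waiting for $rpc_addr" >&2
    exit 1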
00:30:58.386 [2024-11-27 10:03:13.648211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.386 [2024-11-27 10:03:13.648785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.386 [2024-11-27 10:03:13.648814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.386 [2024-11-27 10:03:13.648823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.386 [2024-11-27 10:03:13.648987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.386 [2024-11-27 10:03:13.649151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.386 [2024-11-27 10:03:13.649164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.386 [2024-11-27 10:03:13.649171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.386 [2024-11-27 10:03:13.649176] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.386 [2024-11-27 10:03:13.660871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.386 [2024-11-27 10:03:13.661508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.386 [2024-11-27 10:03:13.661539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.386 [2024-11-27 10:03:13.661548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.386 [2024-11-27 10:03:13.661712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.386 [2024-11-27 10:03:13.661863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.386 [2024-11-27 10:03:13.661874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.386 [2024-11-27 10:03:13.661879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.386 [2024-11-27 10:03:13.661885] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:58.386 [2024-11-27 10:03:13.673448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.386 [2024-11-27 10:03:13.673919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.386 [2024-11-27 10:03:13.673933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.386 [2024-11-27 10:03:13.673939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.386 [2024-11-27 10:03:13.674088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.386 [2024-11-27 10:03:13.674241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.386 [2024-11-27 10:03:13.674248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.386 [2024-11-27 10:03:13.674253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.386 [2024-11-27 10:03:13.674258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.386 [2024-11-27 10:03:13.686107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.386 [2024-11-27 10:03:13.686431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.386 [2024-11-27 10:03:13.686445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.386 [2024-11-27 10:03:13.686450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.386 [2024-11-27 10:03:13.686599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.386 [2024-11-27 10:03:13.686748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.386 [2024-11-27 10:03:13.686753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.386 [2024-11-27 10:03:13.686758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.386 [2024-11-27 10:03:13.686763] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.386 [2024-11-27 10:03:13.687332] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
00:30:58.386 [2024-11-27 10:03:13.687380] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:58.386 [2024-11-27 10:03:13.698728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.386 [2024-11-27 10:03:13.699186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.386 [2024-11-27 10:03:13.699201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.386 [2024-11-27 10:03:13.699206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.386 [2024-11-27 10:03:13.699355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.386 [2024-11-27 10:03:13.699508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.386 [2024-11-27 10:03:13.699514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.386 [2024-11-27 10:03:13.699519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.386 [2024-11-27 10:03:13.699524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.386 [2024-11-27 10:03:13.711364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.386 [2024-11-27 10:03:13.711907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.387 [2024-11-27 10:03:13.711937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.387 [2024-11-27 10:03:13.711946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.387 [2024-11-27 10:03:13.712111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.387 [2024-11-27 10:03:13.712268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.387 [2024-11-27 10:03:13.712276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.387 [2024-11-27 10:03:13.712282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.387 [2024-11-27 10:03:13.712287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
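The EAL parameter dump shows the relaunched target coming up with core mask 0xE (the -m 0xE handed to nvmf_tgt above, surfacing as -c 0xE in the EAL arguments). 0xE is binary 1110, so cores 1, 2 and 3 are selected and core 0 is left alone; that matches the "Total cores available: 3" notice and the three reactors on cores 1-3 reported a little further down. A throwaway snippet to decode any such mask:

    # Decode a DPDK/SPDK core mask into the cores it selects;
    # with mask=0xE this prints cores 1, 2 and 3.
    mask=$((0xE))
    for core in {0..31}; do
        (( (mask >> core) & 1 )) && echo "core $core selected"
    done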
00:30:58.387 [2024-11-27 10:03:13.724075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.387 [2024-11-27 10:03:13.724574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.387 [2024-11-27 10:03:13.724589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.387 [2024-11-27 10:03:13.724595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.387 [2024-11-27 10:03:13.724744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.387 [2024-11-27 10:03:13.724893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.387 [2024-11-27 10:03:13.724899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.387 [2024-11-27 10:03:13.724904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.387 [2024-11-27 10:03:13.724909] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.387 [2024-11-27 10:03:13.736748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.387 [2024-11-27 10:03:13.737117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.387 [2024-11-27 10:03:13.737129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.387 [2024-11-27 10:03:13.737135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.387 [2024-11-27 10:03:13.737287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.387 [2024-11-27 10:03:13.737436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.387 [2024-11-27 10:03:13.737442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.387 [2024-11-27 10:03:13.737452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.387 [2024-11-27 10:03:13.737457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:58.387 [2024-11-27 10:03:13.749437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.387 [2024-11-27 10:03:13.749888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.387 [2024-11-27 10:03:13.749900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.387 [2024-11-27 10:03:13.749905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.387 [2024-11-27 10:03:13.750053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.387 [2024-11-27 10:03:13.750205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.387 [2024-11-27 10:03:13.750212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.387 [2024-11-27 10:03:13.750217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.387 [2024-11-27 10:03:13.750221] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.387 [2024-11-27 10:03:13.762061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.387 [2024-11-27 10:03:13.762525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.387 [2024-11-27 10:03:13.762555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.387 [2024-11-27 10:03:13.762563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.387 [2024-11-27 10:03:13.762729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.387 [2024-11-27 10:03:13.762880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.387 [2024-11-27 10:03:13.762887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.387 [2024-11-27 10:03:13.762892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.387 [2024-11-27 10:03:13.762898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:58.387 [2024-11-27 10:03:13.774745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.387 [2024-11-27 10:03:13.775297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.387 [2024-11-27 10:03:13.775328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.387 [2024-11-27 10:03:13.775336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.387 [2024-11-27 10:03:13.775501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.387 [2024-11-27 10:03:13.775653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.387 [2024-11-27 10:03:13.775660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.387 [2024-11-27 10:03:13.775665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.387 [2024-11-27 10:03:13.775671] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.387 [2024-11-27 10:03:13.778180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:58.387 [2024-11-27 10:03:13.787390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.387 [2024-11-27 10:03:13.787930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.387 [2024-11-27 10:03:13.787960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.387 [2024-11-27 10:03:13.787969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.387 [2024-11-27 10:03:13.788135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.387 [2024-11-27 10:03:13.788293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.387 [2024-11-27 10:03:13.788300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.387 [2024-11-27 10:03:13.788306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.387 [2024-11-27 10:03:13.788312] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:58.387 [2024-11-27 10:03:13.800038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.387 [2024-11-27 10:03:13.800445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.387 [2024-11-27 10:03:13.800460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.387 [2024-11-27 10:03:13.800466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.387 [2024-11-27 10:03:13.800615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.387 [2024-11-27 10:03:13.800764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.388 [2024-11-27 10:03:13.800769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.388 [2024-11-27 10:03:13.800774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.388 [2024-11-27 10:03:13.800780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.388 [2024-11-27 10:03:13.807425] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:58.388 [2024-11-27 10:03:13.807446] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:58.388 [2024-11-27 10:03:13.807452] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:58.388 [2024-11-27 10:03:13.807458] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:58.388 [2024-11-27 10:03:13.807463] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
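The app_setup_trace notices spell out how to capture the 0xFFFF tracepoint data they advertise. Following them verbatim on the CI host while instance -i 0 is still running (the only assumption is that spdk_trace is reachable on PATH; in this tree it is built under build/bin):

    # Live snapshot of the nvmf trace, exactly as the notices above suggest.
    spdk_trace -s nvmf -i 0 > nvmf_trace_snapshot.txt

    # Or stash the raw shared-memory file for offline analysis/debug.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0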
00:30:58.388 [2024-11-27 10:03:13.808552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:58.388 [2024-11-27 10:03:13.808703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:58.388 [2024-11-27 10:03:13.808706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:58.388 [2024-11-27 10:03:13.812629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.388 [2024-11-27 10:03:13.813091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.388 [2024-11-27 10:03:13.813104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.388 [2024-11-27 10:03:13.813109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.388 [2024-11-27 10:03:13.813263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.388 [2024-11-27 10:03:13.813417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.388 [2024-11-27 10:03:13.813423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.388 [2024-11-27 10:03:13.813428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.388 [2024-11-27 10:03:13.813433] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.388 [2024-11-27 10:03:13.825280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.388 [2024-11-27 10:03:13.825880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.388 [2024-11-27 10:03:13.825914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.388 [2024-11-27 10:03:13.825923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.388 [2024-11-27 10:03:13.826092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.388 [2024-11-27 10:03:13.826250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.388 [2024-11-27 10:03:13.826258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.388 [2024-11-27 10:03:13.826263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.388 [2024-11-27 10:03:13.826270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
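The three reactor_run notices line up with the 0xE mask: one reactor each on cores 1, 2 and 3. If that placement ever needs checking from outside the process, a small sketch (this assumes Linux procps ps and SPDK's convention of naming reactor threads reactor_<core>; both are assumptions, not something this log proves):

    # Show nvmf_tgt's threads and the CPU each one last ran on; the
    # reactor_1..reactor_3 threads should sit on cores 1, 2 and 3.
    # (Thread naming assumed; PSR is the last-scheduled processor column.)
    pid=$(pgrep -f nvmf_tgt | head -n 1)
    ps -L -o tid,comm,psr -p "$pid" | grep -E 'reactor_|PSR'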
00:30:58.388 [2024-11-27 10:03:13.837974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.388 [2024-11-27 10:03:13.838474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.388 [2024-11-27 10:03:13.838490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.388 [2024-11-27 10:03:13.838495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.388 [2024-11-27 10:03:13.838645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.388 [2024-11-27 10:03:13.838793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.388 [2024-11-27 10:03:13.838799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.388 [2024-11-27 10:03:13.838804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.388 [2024-11-27 10:03:13.838809] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.651 [2024-11-27 10:03:13.850661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.651 [2024-11-27 10:03:13.851161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.651 [2024-11-27 10:03:13.851176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.651 [2024-11-27 10:03:13.851182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.651 [2024-11-27 10:03:13.851331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.651 [2024-11-27 10:03:13.851480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.651 [2024-11-27 10:03:13.851487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.651 [2024-11-27 10:03:13.851496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.651 [2024-11-27 10:03:13.851502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:58.651 [2024-11-27 10:03:13.863337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.651 [2024-11-27 10:03:13.863800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.651 [2024-11-27 10:03:13.863812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.651 [2024-11-27 10:03:13.863817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.651 [2024-11-27 10:03:13.863965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.651 [2024-11-27 10:03:13.864114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.651 [2024-11-27 10:03:13.864119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.651 [2024-11-27 10:03:13.864124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.651 [2024-11-27 10:03:13.864129] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.651 [2024-11-27 10:03:13.875962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.651 [2024-11-27 10:03:13.876326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.651 [2024-11-27 10:03:13.876341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.651 [2024-11-27 10:03:13.876347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.651 [2024-11-27 10:03:13.876496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.651 [2024-11-27 10:03:13.876645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.651 [2024-11-27 10:03:13.876650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.651 [2024-11-27 10:03:13.876655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.651 [2024-11-27 10:03:13.876660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:58.651 [2024-11-27 10:03:13.888569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.651 [2024-11-27 10:03:13.889101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.651 [2024-11-27 10:03:13.889133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.651 [2024-11-27 10:03:13.889142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.651 [2024-11-27 10:03:13.889314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.651 [2024-11-27 10:03:13.889467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.651 [2024-11-27 10:03:13.889473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.651 [2024-11-27 10:03:13.889479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.651 [2024-11-27 10:03:13.889485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.651 [2024-11-27 10:03:13.901208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.651 [2024-11-27 10:03:13.901788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.651 [2024-11-27 10:03:13.901819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.651 [2024-11-27 10:03:13.901828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.651 [2024-11-27 10:03:13.901993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.651 [2024-11-27 10:03:13.902145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.651 [2024-11-27 10:03:13.902151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.651 [2024-11-27 10:03:13.902156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.651 [2024-11-27 10:03:13.902170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:58.651 [2024-11-27 10:03:13.913864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.651 [2024-11-27 10:03:13.914421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.651 [2024-11-27 10:03:13.914451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.651 [2024-11-27 10:03:13.914460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.651 [2024-11-27 10:03:13.914624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.651 [2024-11-27 10:03:13.914776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.651 [2024-11-27 10:03:13.914783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.651 [2024-11-27 10:03:13.914788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.651 [2024-11-27 10:03:13.914794] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.651 [2024-11-27 10:03:13.926500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.651 [2024-11-27 10:03:13.927032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.651 [2024-11-27 10:03:13.927062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.652 [2024-11-27 10:03:13.927071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.652 [2024-11-27 10:03:13.927241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.652 [2024-11-27 10:03:13.927393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.652 [2024-11-27 10:03:13.927400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.652 [2024-11-27 10:03:13.927405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.652 [2024-11-27 10:03:13.927411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:58.652 [2024-11-27 10:03:13.939118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.652 [2024-11-27 10:03:13.939646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.652 [2024-11-27 10:03:13.939676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.652 [2024-11-27 10:03:13.939688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.652 [2024-11-27 10:03:13.939853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.652 [2024-11-27 10:03:13.940005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.652 [2024-11-27 10:03:13.940011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.652 [2024-11-27 10:03:13.940017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.652 [2024-11-27 10:03:13.940022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.652 [2024-11-27 10:03:13.951735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.652 [2024-11-27 10:03:13.952385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.652 [2024-11-27 10:03:13.952416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.652 [2024-11-27 10:03:13.952425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.652 [2024-11-27 10:03:13.952590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.652 [2024-11-27 10:03:13.952742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.652 [2024-11-27 10:03:13.952748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.652 [2024-11-27 10:03:13.952754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.652 [2024-11-27 10:03:13.952760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:58.652 [2024-11-27 10:03:13.964325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.652 [2024-11-27 10:03:13.964886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.652 [2024-11-27 10:03:13.964916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.652 [2024-11-27 10:03:13.964925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.652 [2024-11-27 10:03:13.965090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.652 [2024-11-27 10:03:13.965248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.652 [2024-11-27 10:03:13.965256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.652 [2024-11-27 10:03:13.965262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.652 [2024-11-27 10:03:13.965267] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.652 [2024-11-27 10:03:13.976972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.652 [2024-11-27 10:03:13.977567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.652 [2024-11-27 10:03:13.977598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.652 [2024-11-27 10:03:13.977607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.652 [2024-11-27 10:03:13.977772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.652 [2024-11-27 10:03:13.977928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.652 [2024-11-27 10:03:13.977934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.652 [2024-11-27 10:03:13.977940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.652 [2024-11-27 10:03:13.977945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:58.652 [2024-11-27 10:03:13.989667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.652 [2024-11-27 10:03:13.990244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.652 [2024-11-27 10:03:13.990274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.652 [2024-11-27 10:03:13.990283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.652 [2024-11-27 10:03:13.990448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.652 [2024-11-27 10:03:13.990600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.652 [2024-11-27 10:03:13.990607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.652 [2024-11-27 10:03:13.990613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.652 [2024-11-27 10:03:13.990618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.652 4707.67 IOPS, 18.39 MiB/s [2024-11-27T09:03:14.118Z] [2024-11-27 10:03:14.002347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.652 [2024-11-27 10:03:14.002927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.652 [2024-11-27 10:03:14.002957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.652 [2024-11-27 10:03:14.002966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.652 [2024-11-27 10:03:14.003131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.652 [2024-11-27 10:03:14.003289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.652 [2024-11-27 10:03:14.003296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.652 [2024-11-27 10:03:14.003302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.652 [2024-11-27 10:03:14.003307] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
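[Editor's note] The bdevperf sample interleaved above ("4707.67 IOPS, 18.39 MiB/s") is consistent with a 4 KiB I/O size: 4707.67 IOPS x 4096 B = 19,282,616 B/s = 18.39 MiB/s, so the workload is still completing 4 KiB I/Os while controller instance 2 of nqn.2016-06.io.spdk:cnode1 fails its resets. The 4 KiB size is inferred from the arithmetic, not stated in the trace. A one-line check (the helper name is illustrative):

#include <stdio.h>

/* mibps_from_iops - illustrative helper: convert an IOPS sample to MiB/s
 * for a given I/O size in bytes. The 4096-byte size is inferred from the
 * log's numbers, not reported by the tool. */
static double mibps_from_iops(double iops, double io_size_bytes)
{
    return iops * io_size_bytes / (1024.0 * 1024.0);
}

int main(void)
{
    /* 4707.67 IOPS at 4 KiB -> ~18.39 MiB/s, matching the sample above */
    printf("%.2f MiB/s\n", mibps_from_iops(4707.67, 4096.0));
    return 0;
}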
00:30:58.652 [2024-11-27 10:03:14.015028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.652 [2024-11-27 10:03:14.015488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.652 [2024-11-27 10:03:14.015519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.652 [2024-11-27 10:03:14.015528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.652 [2024-11-27 10:03:14.015692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.652 [2024-11-27 10:03:14.015844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.652 [2024-11-27 10:03:14.015850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.652 [2024-11-27 10:03:14.015859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.652 [2024-11-27 10:03:14.015865] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.652 [2024-11-27 10:03:14.027712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.652 [2024-11-27 10:03:14.028165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.652 [2024-11-27 10:03:14.028196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.652 [2024-11-27 10:03:14.028204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.652 [2024-11-27 10:03:14.028371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.652 [2024-11-27 10:03:14.028523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.652 [2024-11-27 10:03:14.028530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.652 [2024-11-27 10:03:14.028535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.652 [2024-11-27 10:03:14.028540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:58.652 [2024-11-27 10:03:14.040388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.652 [2024-11-27 10:03:14.040961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.652 [2024-11-27 10:03:14.040991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.652 [2024-11-27 10:03:14.041000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.652 [2024-11-27 10:03:14.041172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.652 [2024-11-27 10:03:14.041325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.652 [2024-11-27 10:03:14.041331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.652 [2024-11-27 10:03:14.041337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.652 [2024-11-27 10:03:14.041342] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.652 [2024-11-27 10:03:14.053038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.653 [2024-11-27 10:03:14.053604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.653 [2024-11-27 10:03:14.053635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.653 [2024-11-27 10:03:14.053644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.653 [2024-11-27 10:03:14.053808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.653 [2024-11-27 10:03:14.053960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.653 [2024-11-27 10:03:14.053967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.653 [2024-11-27 10:03:14.053972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.653 [2024-11-27 10:03:14.053978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:58.653 [2024-11-27 10:03:14.065688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.653 [2024-11-27 10:03:14.066253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.653 [2024-11-27 10:03:14.066283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.653 [2024-11-27 10:03:14.066292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.653 [2024-11-27 10:03:14.066459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.653 [2024-11-27 10:03:14.066611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.653 [2024-11-27 10:03:14.066618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.653 [2024-11-27 10:03:14.066624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.653 [2024-11-27 10:03:14.066630] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.653 [2024-11-27 10:03:14.078347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.653 [2024-11-27 10:03:14.078793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.653 [2024-11-27 10:03:14.078823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.653 [2024-11-27 10:03:14.078832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.653 [2024-11-27 10:03:14.078997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.653 [2024-11-27 10:03:14.079149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.653 [2024-11-27 10:03:14.079156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.653 [2024-11-27 10:03:14.079173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.653 [2024-11-27 10:03:14.079179] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:58.653 [2024-11-27 10:03:14.091018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.653 [2024-11-27 10:03:14.091616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.653 [2024-11-27 10:03:14.091647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.653 [2024-11-27 10:03:14.091655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.653 [2024-11-27 10:03:14.091820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.653 [2024-11-27 10:03:14.091972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.653 [2024-11-27 10:03:14.091979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.653 [2024-11-27 10:03:14.091984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.653 [2024-11-27 10:03:14.091990] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.653 [2024-11-27 10:03:14.103704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.653 [2024-11-27 10:03:14.104268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.653 [2024-11-27 10:03:14.104299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.653 [2024-11-27 10:03:14.104311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.653 [2024-11-27 10:03:14.104475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.653 [2024-11-27 10:03:14.104627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.653 [2024-11-27 10:03:14.104633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.653 [2024-11-27 10:03:14.104638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.653 [2024-11-27 10:03:14.104644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:58.966 [2024-11-27 10:03:14.116349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.966 [2024-11-27 10:03:14.116824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.966 [2024-11-27 10:03:14.116839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.966 [2024-11-27 10:03:14.116844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.966 [2024-11-27 10:03:14.116993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.966 [2024-11-27 10:03:14.117142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.966 [2024-11-27 10:03:14.117147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.966 [2024-11-27 10:03:14.117152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.966 [2024-11-27 10:03:14.117157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.966 [2024-11-27 10:03:14.128999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.966 [2024-11-27 10:03:14.129348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.966 [2024-11-27 10:03:14.129361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.966 [2024-11-27 10:03:14.129366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.966 [2024-11-27 10:03:14.129514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.966 [2024-11-27 10:03:14.129662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.966 [2024-11-27 10:03:14.129668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.966 [2024-11-27 10:03:14.129673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.966 [2024-11-27 10:03:14.129677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:58.966 [2024-11-27 10:03:14.141656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.966 [2024-11-27 10:03:14.142116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.966 [2024-11-27 10:03:14.142128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.966 [2024-11-27 10:03:14.142135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.966 [2024-11-27 10:03:14.142288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.966 [2024-11-27 10:03:14.142441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.966 [2024-11-27 10:03:14.142447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.966 [2024-11-27 10:03:14.142452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.966 [2024-11-27 10:03:14.142457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.966 [2024-11-27 10:03:14.154295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.966 [2024-11-27 10:03:14.154767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.966 [2024-11-27 10:03:14.154779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.966 [2024-11-27 10:03:14.154785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.966 [2024-11-27 10:03:14.154933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.966 [2024-11-27 10:03:14.155081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.966 [2024-11-27 10:03:14.155087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.966 [2024-11-27 10:03:14.155092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.966 [2024-11-27 10:03:14.155097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:58.966 [2024-11-27 10:03:14.166934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.966 [2024-11-27 10:03:14.167504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.966 [2024-11-27 10:03:14.167534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.966 [2024-11-27 10:03:14.167543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.966 [2024-11-27 10:03:14.167708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.966 [2024-11-27 10:03:14.167860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.966 [2024-11-27 10:03:14.167867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.966 [2024-11-27 10:03:14.167872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.966 [2024-11-27 10:03:14.167877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.966 [2024-11-27 10:03:14.179577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.966 [2024-11-27 10:03:14.180011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.966 [2024-11-27 10:03:14.180025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.967 [2024-11-27 10:03:14.180031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.967 [2024-11-27 10:03:14.180191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.967 [2024-11-27 10:03:14.180340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.967 [2024-11-27 10:03:14.180346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.967 [2024-11-27 10:03:14.180355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.967 [2024-11-27 10:03:14.180360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:58.967 [2024-11-27 10:03:14.192193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.967 [2024-11-27 10:03:14.192741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.967 [2024-11-27 10:03:14.192771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.967 [2024-11-27 10:03:14.192779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.967 [2024-11-27 10:03:14.192944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.967 [2024-11-27 10:03:14.193096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.967 [2024-11-27 10:03:14.193103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.967 [2024-11-27 10:03:14.193109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.967 [2024-11-27 10:03:14.193115] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.967 [2024-11-27 10:03:14.204824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.967 [2024-11-27 10:03:14.205404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.967 [2024-11-27 10:03:14.205435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.967 [2024-11-27 10:03:14.205444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.967 [2024-11-27 10:03:14.205610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.967 [2024-11-27 10:03:14.205762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.967 [2024-11-27 10:03:14.205769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.967 [2024-11-27 10:03:14.205774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.967 [2024-11-27 10:03:14.205779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:58.967 [2024-11-27 10:03:14.217482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.967 [2024-11-27 10:03:14.218044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.967 [2024-11-27 10:03:14.218074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.967 [2024-11-27 10:03:14.218083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.967 [2024-11-27 10:03:14.218254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.967 [2024-11-27 10:03:14.218407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.967 [2024-11-27 10:03:14.218413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.967 [2024-11-27 10:03:14.218418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.967 [2024-11-27 10:03:14.218424] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.967 [2024-11-27 10:03:14.230119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.967 [2024-11-27 10:03:14.230469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.967 [2024-11-27 10:03:14.230485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.967 [2024-11-27 10:03:14.230490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.967 [2024-11-27 10:03:14.230639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.967 [2024-11-27 10:03:14.230788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.967 [2024-11-27 10:03:14.230794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.967 [2024-11-27 10:03:14.230799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.967 [2024-11-27 10:03:14.230804] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:58.967 [2024-11-27 10:03:14.242782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.967 [2024-11-27 10:03:14.243251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.967 [2024-11-27 10:03:14.243281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.967 [2024-11-27 10:03:14.243290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.967 [2024-11-27 10:03:14.243455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.967 [2024-11-27 10:03:14.243607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.967 [2024-11-27 10:03:14.243613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.967 [2024-11-27 10:03:14.243619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.967 [2024-11-27 10:03:14.243625] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.967 [2024-11-27 10:03:14.255475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.967 [2024-11-27 10:03:14.256031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.967 [2024-11-27 10:03:14.256061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.967 [2024-11-27 10:03:14.256070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.967 [2024-11-27 10:03:14.256241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.967 [2024-11-27 10:03:14.256393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.967 [2024-11-27 10:03:14.256399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.967 [2024-11-27 10:03:14.256405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.967 [2024-11-27 10:03:14.256411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:58.967 [2024-11-27 10:03:14.268103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.967 [2024-11-27 10:03:14.268652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.967 [2024-11-27 10:03:14.268683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.967 [2024-11-27 10:03:14.268695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.967 [2024-11-27 10:03:14.268860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.967 [2024-11-27 10:03:14.269012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.967 [2024-11-27 10:03:14.269018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.967 [2024-11-27 10:03:14.269024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.967 [2024-11-27 10:03:14.269029] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.967 [2024-11-27 10:03:14.280741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.967 [2024-11-27 10:03:14.281217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.967 [2024-11-27 10:03:14.281248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.967 [2024-11-27 10:03:14.281254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.967 [2024-11-27 10:03:14.281408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.967 [2024-11-27 10:03:14.281558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.967 [2024-11-27 10:03:14.281564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.967 [2024-11-27 10:03:14.281569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.967 [2024-11-27 10:03:14.281574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:58.967 [2024-11-27 10:03:14.293434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.967 [2024-11-27 10:03:14.293966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.967 [2024-11-27 10:03:14.293997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.967 [2024-11-27 10:03:14.294006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.967 [2024-11-27 10:03:14.294176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.967 [2024-11-27 10:03:14.294330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.967 [2024-11-27 10:03:14.294336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.967 [2024-11-27 10:03:14.294342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.967 [2024-11-27 10:03:14.294347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.968 [2024-11-27 10:03:14.306056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.968 [2024-11-27 10:03:14.306607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.968 [2024-11-27 10:03:14.306638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.968 [2024-11-27 10:03:14.306647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.968 [2024-11-27 10:03:14.306812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.968 [2024-11-27 10:03:14.306969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.968 [2024-11-27 10:03:14.306975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.968 [2024-11-27 10:03:14.306981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.968 [2024-11-27 10:03:14.306987] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:58.968 [2024-11-27 10:03:14.318709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.968 [2024-11-27 10:03:14.319281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.968 [2024-11-27 10:03:14.319311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.968 [2024-11-27 10:03:14.319320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.968 [2024-11-27 10:03:14.319487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.968 [2024-11-27 10:03:14.319638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.968 [2024-11-27 10:03:14.319645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.968 [2024-11-27 10:03:14.319650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.968 [2024-11-27 10:03:14.319656] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.968 [2024-11-27 10:03:14.331376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.968 [2024-11-27 10:03:14.331943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.968 [2024-11-27 10:03:14.331973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.968 [2024-11-27 10:03:14.331982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.968 [2024-11-27 10:03:14.332147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.968 [2024-11-27 10:03:14.332305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.968 [2024-11-27 10:03:14.332312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.968 [2024-11-27 10:03:14.332318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.968 [2024-11-27 10:03:14.332323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:58.968 [2024-11-27 10:03:14.344060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.968 [2024-11-27 10:03:14.344637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.968 [2024-11-27 10:03:14.344667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.968 [2024-11-27 10:03:14.344676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.968 [2024-11-27 10:03:14.344840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.968 [2024-11-27 10:03:14.344993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.968 [2024-11-27 10:03:14.344999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.968 [2024-11-27 10:03:14.345008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.968 [2024-11-27 10:03:14.345014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.968 [2024-11-27 10:03:14.356717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.968 [2024-11-27 10:03:14.357219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.968 [2024-11-27 10:03:14.357249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.968 [2024-11-27 10:03:14.357258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.968 [2024-11-27 10:03:14.357426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.968 [2024-11-27 10:03:14.357578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.968 [2024-11-27 10:03:14.357585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.968 [2024-11-27 10:03:14.357591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.968 [2024-11-27 10:03:14.357596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:58.968 [2024-11-27 10:03:14.369301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.968 [2024-11-27 10:03:14.369859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.968 [2024-11-27 10:03:14.369889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.968 [2024-11-27 10:03:14.369898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.968 [2024-11-27 10:03:14.370063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.968 [2024-11-27 10:03:14.370220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.968 [2024-11-27 10:03:14.370228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.968 [2024-11-27 10:03:14.370234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.968 [2024-11-27 10:03:14.370240] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:58.968 [2024-11-27 10:03:14.381940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:58.968 [2024-11-27 10:03:14.382286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.968 [2024-11-27 10:03:14.382302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:58.968 [2024-11-27 10:03:14.382308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:58.968 [2024-11-27 10:03:14.382457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:58.968 [2024-11-27 10:03:14.382605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:58.968 [2024-11-27 10:03:14.382611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:58.968 [2024-11-27 10:03:14.382616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:58.968 [2024-11-27 10:03:14.382621] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:59.271 [2024-11-27 10:03:14.394634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:59.271 [2024-11-27 10:03:14.395051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.271 [2024-11-27 10:03:14.395064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:59.271 [2024-11-27 10:03:14.395070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:59.271 [2024-11-27 10:03:14.395222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:59.271 [2024-11-27 10:03:14.395372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:59.271 [2024-11-27 10:03:14.395378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:59.271 [2024-11-27 10:03:14.395383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:59.271 [2024-11-27 10:03:14.395388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:59.271 [2024-11-27 10:03:14.407246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:59.271 [2024-11-27 10:03:14.407660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.271 [2024-11-27 10:03:14.407691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:59.271 [2024-11-27 10:03:14.407700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:59.271 [2024-11-27 10:03:14.407864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:59.271 [2024-11-27 10:03:14.408016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:59.271 [2024-11-27 10:03:14.408024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:59.271 [2024-11-27 10:03:14.408030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:59.271 [2024-11-27 10:03:14.408036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:59.271 [2024-11-27 10:03:14.419898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:59.271 [2024-11-27 10:03:14.420470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.271 [2024-11-27 10:03:14.420500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:59.271 [2024-11-27 10:03:14.420509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:59.271 [2024-11-27 10:03:14.420674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:59.271 [2024-11-27 10:03:14.420825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:59.271 [2024-11-27 10:03:14.420832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:59.271 [2024-11-27 10:03:14.420838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:59.271 [2024-11-27 10:03:14.420843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:59.271 [2024-11-27 10:03:14.432548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:59.271 [2024-11-27 10:03:14.432901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.271 [2024-11-27 10:03:14.432916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:59.271 [2024-11-27 10:03:14.432926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:59.271 [2024-11-27 10:03:14.433074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:59.272 [2024-11-27 10:03:14.433229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:59.272 [2024-11-27 10:03:14.433235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:59.272 [2024-11-27 10:03:14.433240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:59.272 [2024-11-27 10:03:14.433245] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:59.272 [2024-11-27 10:03:14.445232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:59.272 [2024-11-27 10:03:14.445570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.272 [2024-11-27 10:03:14.445584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:59.272 [2024-11-27 10:03:14.445590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:59.272 [2024-11-27 10:03:14.445739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:59.272 [2024-11-27 10:03:14.445887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:59.272 [2024-11-27 10:03:14.445893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:59.272 [2024-11-27 10:03:14.445899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:59.272 [2024-11-27 10:03:14.445904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:59.272 [2024-11-27 10:03:14.457891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:59.272 [2024-11-27 10:03:14.458239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.272 [2024-11-27 10:03:14.458253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:59.272 [2024-11-27 10:03:14.458258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:59.272 [2024-11-27 10:03:14.458407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:59.272 [2024-11-27 10:03:14.458555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:59.272 [2024-11-27 10:03:14.458561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:59.272 [2024-11-27 10:03:14.458566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:59.272 [2024-11-27 10:03:14.458570] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
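While the target side is still being configured these refusals are expected, and how long the host keeps cycling is bounded by the bdev_nvme reconnect settings rather than by the loop itself. As a hedged sketch (long flag names as in SPDK's rpc.py; the values are illustrative assumptions, not what this test uses), a controller could be attached with explicit retry bounds like so:

  # Retry every 2 s; declare the controller lost after 30 s without a connection.
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 30 --reconnect-delay-sec 2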
00:30:59.272 10:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:59.272 10:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:30:59.272 10:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:59.272 10:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:59.272 10:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:59.272 [2024-11-27 10:03:14.470552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:59.272 [2024-11-27 10:03:14.471078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.272 [2024-11-27 10:03:14.471112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:59.272 [2024-11-27 10:03:14.471121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:59.272 [2024-11-27 10:03:14.471294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:59.272 [2024-11-27 10:03:14.471446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:59.272 [2024-11-27 10:03:14.471453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:59.272 [2024-11-27 10:03:14.471458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:59.272 [2024-11-27 10:03:14.471464] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:59.272 [2024-11-27 10:03:14.483178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:59.272 [2024-11-27 10:03:14.483712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.272 [2024-11-27 10:03:14.483742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:59.272 [2024-11-27 10:03:14.483751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:59.272 [2024-11-27 10:03:14.483915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:59.272 [2024-11-27 10:03:14.484067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:59.272 [2024-11-27 10:03:14.484074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:59.272 [2024-11-27 10:03:14.484079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:59.272 [2024-11-27 10:03:14.484085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
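The shell trace interleaved here, (( i == 0 )) followed by return 0 and timing_exit start_nvmf_tgt, is the tail of the harness's wait loop: it has been polling until the newly started nvmf_tgt answers, and (( i == 0 )) turns an exhausted retry budget into a failure. A minimal sketch of the pattern, with hypothetical names (the real helper lives in autotest_common.sh and differs in detail):

  wait_for_target() {                            # hypothetical stand-in for the harness helper
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 100; i > 0; i--)); do
      kill -0 "$pid" 2>/dev/null || return 1     # target process died
      [[ -S $sock ]] && break                    # RPC socket is accepting
      sleep 0.1
    done
    (( i == 0 )) && return 1                     # retry budget exhausted
    return 0
  }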
00:30:59.272 [2024-11-27 10:03:14.495794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:59.272 [2024-11-27 10:03:14.496114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.272 [2024-11-27 10:03:14.496128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:59.272 [2024-11-27 10:03:14.496134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:59.272 [2024-11-27 10:03:14.496287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:59.272 [2024-11-27 10:03:14.496436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:59.272 [2024-11-27 10:03:14.496441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:59.272 [2024-11-27 10:03:14.496446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:59.272 [2024-11-27 10:03:14.496451] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:59.272 [2024-11-27 10:03:14.508454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:59.272 [2024-11-27 10:03:14.508940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.272 [2024-11-27 10:03:14.508971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:59.272 [2024-11-27 10:03:14.508980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:59.272 [2024-11-27 10:03:14.509149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:59.272 [2024-11-27 10:03:14.509307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:59.272 [2024-11-27 10:03:14.509314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:59.272 [2024-11-27 10:03:14.509320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:59.272 [2024-11-27 10:03:14.509326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:59.272 10:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:59.272 10:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:59.272 10:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.272 10:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:59.272 [2024-11-27 10:03:14.516046] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:59.272 [2024-11-27 10:03:14.521042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:59.272 10:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.272 [2024-11-27 10:03:14.521621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.272 [2024-11-27 10:03:14.521651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:59.272 [2024-11-27 10:03:14.521660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:59.272 [2024-11-27 10:03:14.521824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:59.272 10:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:59.272 [2024-11-27 10:03:14.521977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:59.272 [2024-11-27 10:03:14.521984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:59.272 [2024-11-27 10:03:14.521990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:59.272 [2024-11-27 10:03:14.521996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
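With the target process answering, configuration begins: nvmf_create_transport -t tcp -o -u 8192 brings up the TCP transport (the "*** TCP Transport Init ***" notice above) and bdev_malloc_create 64 512 -b Malloc0 creates a 64 MiB RAM-backed bdev with 512-byte blocks. rpc_cmd simply forwards its arguments to the target's RPC socket, so the equivalent standalone invocations would be roughly as follows (reading -u as io-unit-size and -o as the TCP C2H-success toggle, per rpc.py's help text; treat that reading as an assumption):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP transport, 8 KiB IO unit
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512 B blocks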
00:30:59.272 10:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.272 10:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:59.272 [2024-11-27 10:03:14.533698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:59.272 [2024-11-27 10:03:14.534260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.272 [2024-11-27 10:03:14.534291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:59.272 [2024-11-27 10:03:14.534299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:59.272 [2024-11-27 10:03:14.534466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:59.272 [2024-11-27 10:03:14.534619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:59.272 [2024-11-27 10:03:14.534625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:59.272 [2024-11-27 10:03:14.534630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:59.272 [2024-11-27 10:03:14.534640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:59.272 [2024-11-27 10:03:14.546346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:59.273 [2024-11-27 10:03:14.546909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.273 [2024-11-27 10:03:14.546939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:59.273 [2024-11-27 10:03:14.546948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:59.273 [2024-11-27 10:03:14.547112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:59.273 [2024-11-27 10:03:14.547270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:59.273 [2024-11-27 10:03:14.547278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:59.273 [2024-11-27 10:03:14.547284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:59.273 [2024-11-27 10:03:14.547290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
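The trace lines that follow complete the target: nvmf_create_subsystem registers nqn.2016-06.io.spdk:cnode1 (-a allows any host NQN, -s sets the serial number), nvmf_subsystem_add_ns exposes Malloc0 as a namespace, and nvmf_subsystem_add_listener opens 10.0.0.2:4420, at which point the host's retry loop finally lands ("Resetting controller successful"). Condensed into standalone form:

  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001                   # -a: allow any host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420                 # connect() stops failing here

The bdevperf summary further below can be sanity-checked from the 4 KiB IO size: MiB/s = IOPS x 4096 / 2^20, e.g. 9103.07 x 4096 / 1048576 ≈ 35.56 MiB/s, matching the reported column.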
00:30:59.273 Malloc0 00:30:59.273 10:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.273 10:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:59.273 10:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.273 10:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:59.273 [2024-11-27 10:03:14.558986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:59.273 [2024-11-27 10:03:14.559472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.273 [2024-11-27 10:03:14.559503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:59.273 [2024-11-27 10:03:14.559512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:59.273 [2024-11-27 10:03:14.559676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:59.273 [2024-11-27 10:03:14.559829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:59.273 [2024-11-27 10:03:14.559835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:59.273 [2024-11-27 10:03:14.559841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:59.273 [2024-11-27 10:03:14.559847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:59.273 10:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.273 10:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:59.273 10:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.273 10:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:59.273 [2024-11-27 10:03:14.571687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:59.273 [2024-11-27 10:03:14.572163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.273 [2024-11-27 10:03:14.572178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f5000 with addr=10.0.0.2, port=4420 00:30:59.273 [2024-11-27 10:03:14.572184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5000 is same with the state(6) to be set 00:30:59.273 [2024-11-27 10:03:14.572333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f5000 (9): Bad file descriptor 00:30:59.273 [2024-11-27 10:03:14.572486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:59.273 [2024-11-27 10:03:14.572492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:59.273 [2024-11-27 10:03:14.572497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:30:59.273 [2024-11-27 10:03:14.572502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:59.273 10:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.273 10:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:59.273 10:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.273 10:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:59.273 [2024-11-27 10:03:14.579876] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:59.273 [2024-11-27 10:03:14.584354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:59.273 10:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.273 10:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 4064332 00:30:59.273 [2024-11-27 10:03:14.618957] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:31:00.970 4739.71 IOPS, 18.51 MiB/s [2024-11-27T09:03:17.008Z] 5778.12 IOPS, 22.57 MiB/s [2024-11-27T09:03:18.390Z] 6558.78 IOPS, 25.62 MiB/s [2024-11-27T09:03:19.339Z] 7190.90 IOPS, 28.09 MiB/s [2024-11-27T09:03:20.289Z] 7710.91 IOPS, 30.12 MiB/s [2024-11-27T09:03:21.230Z] 8134.17 IOPS, 31.77 MiB/s [2024-11-27T09:03:22.170Z] 8499.15 IOPS, 33.20 MiB/s [2024-11-27T09:03:23.111Z] 8829.79 IOPS, 34.49 MiB/s [2024-11-27T09:03:23.111Z] 9102.27 IOPS, 35.56 MiB/s 00:31:07.645 Latency(us) 00:31:07.645 [2024-11-27T09:03:23.112Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:07.646 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:07.646 Verification LBA range: start 0x0 length 0x4000 00:31:07.646 Nvme1n1 : 15.01 9103.07 35.56 13254.78 0.00 5705.85 546.13 14527.15 00:31:07.646 [2024-11-27T09:03:23.112Z] =================================================================================================================== 00:31:07.646 [2024-11-27T09:03:23.112Z] Total : 9103.07 35.56 13254.78 0.00 5705.85 546.13 14527.15 00:31:07.907 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:31:07.907 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:07.907 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.907 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:07.907 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.907 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:31:07.907 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:31:07.907 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:07.907 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:31:07.907 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:07.907 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:31:07.907 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@125 -- # for i in {1..20} 00:31:07.907 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:07.907 rmmod nvme_tcp 00:31:07.907 rmmod nvme_fabrics 00:31:07.907 rmmod nvme_keyring 00:31:07.907 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:07.907 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:31:07.907 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:31:07.907 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 4066112 ']' 00:31:07.907 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 4066112 00:31:07.907 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 4066112 ']' 00:31:07.907 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 4066112 00:31:07.907 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:31:07.907 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:07.907 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4066112 00:31:07.907 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:07.907 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:07.907 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4066112' 00:31:07.907 killing process with pid 4066112 00:31:07.907 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 4066112 00:31:07.907 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 4066112 00:31:08.168 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:08.168 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:08.168 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:08.168 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:31:08.168 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:31:08.169 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:08.169 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:31:08.169 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:08.169 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:08.169 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.169 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:08.169 10:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:10.080 10:03:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:10.080 00:31:10.080 real 0m28.299s 00:31:10.080 user 1m3.105s 00:31:10.080 sys 0m7.805s 00:31:10.080 10:03:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:10.080 10:03:25 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:10.081 ************************************ 00:31:10.081 END TEST nvmf_bdevperf 00:31:10.081 ************************************ 00:31:10.081 10:03:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:10.081 10:03:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:10.081 10:03:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:10.081 10:03:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.342 ************************************ 00:31:10.342 START TEST nvmf_target_disconnect 00:31:10.342 ************************************ 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:10.342 * Looking for test storage... 00:31:10.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:10.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.342 --rc genhtml_branch_coverage=1 00:31:10.342 --rc genhtml_function_coverage=1 00:31:10.342 --rc genhtml_legend=1 00:31:10.342 --rc geninfo_all_blocks=1 00:31:10.342 --rc geninfo_unexecuted_blocks=1 00:31:10.342 00:31:10.342 ' 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:10.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.342 --rc genhtml_branch_coverage=1 00:31:10.342 --rc genhtml_function_coverage=1 00:31:10.342 --rc genhtml_legend=1 00:31:10.342 --rc geninfo_all_blocks=1 00:31:10.342 --rc geninfo_unexecuted_blocks=1 00:31:10.342 00:31:10.342 ' 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:10.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.342 --rc genhtml_branch_coverage=1 00:31:10.342 --rc genhtml_function_coverage=1 00:31:10.342 --rc genhtml_legend=1 00:31:10.342 --rc geninfo_all_blocks=1 00:31:10.342 --rc geninfo_unexecuted_blocks=1 00:31:10.342 00:31:10.342 ' 00:31:10.342 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:10.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.342 --rc genhtml_branch_coverage=1 00:31:10.342 --rc genhtml_function_coverage=1 00:31:10.342 --rc genhtml_legend=1 00:31:10.342 --rc geninfo_all_blocks=1 00:31:10.342 --rc geninfo_unexecuted_blocks=1 00:31:10.342 00:31:10.342 ' 00:31:10.343 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:10.343 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:31:10.343 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:10.343 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:10.343 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:10.343 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:10.343 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:10.343 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:10.343 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:10.343 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:10.343 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:10.343 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:10.343 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:10.343 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:10.343 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:10.343 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:10.343 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:10.343 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:10.343 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:10.343 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:31:10.604 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:10.604 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:10.604 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:10.604 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.604 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.604 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.604 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:31:10.604 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.604 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:31:10.604 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:10.604 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:10.604 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:10.604 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:10.604 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:10.604 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:10.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:10.604 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:10.604 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:10.604 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:10.604 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:10.604 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:31:10.604 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:31:10.604 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:31:10.604 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:10.604 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:10.604 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:10.604 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:10.604 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:10.604 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:10.604 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:10.604 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:10.604 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:10.604 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:10.604 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:31:10.604 10:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:18.752 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:18.752 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:31:18.752 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:18.752 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:18.752 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:18.752 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:18.752 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:18.752 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:31:18.752 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:18.752 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:31:18.752 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:31:18.752 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:31:18.752 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:31:18.752 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:31:18.752 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:31:18.752 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:18.752 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:18.752 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:18.752 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:18.752 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:18.752 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:18.752 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:18.752 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:18.752 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:18.753 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:18.753 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:18.753 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:18.753 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
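nvmftestinit has now identified the two E810 ports (cvl_0_0 for the target side, cvl_0_1 for the initiator), and the commands that follow give each side its own network stack on the one machine by moving the target port into a private namespace. Condensed from the trace, with the log's own names:

  ip netns add cvl_0_0_ns_spdk                   # namespace for the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator address, host netns
  ip netns exec cvl_0_0_ns_spdk \
      ip addr add 10.0.0.2/24 dev cvl_0_0        # target address, private netns
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port

The two pings that follow (0.678 ms out to 10.0.0.2, 0.317 ms back from inside the namespace to 10.0.0.1) verify the path before any NVMe traffic is attempted.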
00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:18.753 10:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:18.753 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:18.753 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:18.753 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:18.753 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:18.753 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:18.753 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:18.753 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:18.753 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:18.753 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:18.753 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:18.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:18.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:31:18.753 00:31:18.753 --- 10.0.0.2 ping statistics --- 00:31:18.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:18.753 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:31:18.753 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:18.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:18.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:31:18.754 00:31:18.754 --- 10.0.0.1 ping statistics --- 00:31:18.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:18.754 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:18.754 ************************************ 00:31:18.754 START TEST nvmf_target_disconnect_tc1 00:31:18.754 ************************************ 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:18.754 10:03:33 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:18.754 [2024-11-27 10:03:33.515795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:18.754 [2024-11-27 10:03:33.515894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2452ad0 with addr=10.0.0.2, port=4420 00:31:18.754 [2024-11-27 10:03:33.515922] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:18.754 [2024-11-27 10:03:33.515941] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:18.754 [2024-11-27 10:03:33.515950] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:31:18.754 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:31:18.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:31:18.754 Initializing NVMe Controllers 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:18.754 00:31:18.754 real 0m0.141s 00:31:18.754 user 0m0.062s 00:31:18.754 sys 0m0.080s 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:18.754 ************************************ 00:31:18.754 END TEST nvmf_target_disconnect_tc1 00:31:18.754 ************************************ 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:18.754 ************************************ 00:31:18.754 START TEST nvmf_target_disconnect_tc2 00:31:18.754 ************************************ 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=4072159 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 4072159 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 4072159 ']' 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:18.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:18.754 10:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:18.754 [2024-11-27 10:03:33.677339] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:31:18.754 [2024-11-27 10:03:33.677396] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:18.754 [2024-11-27 10:03:33.777201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:18.754 [2024-11-27 10:03:33.829644] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:18.755 [2024-11-27 10:03:33.829693] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:18.755 [2024-11-27 10:03:33.829702] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:18.755 [2024-11-27 10:03:33.829709] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:18.755 [2024-11-27 10:03:33.829716] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:18.755 [2024-11-27 10:03:33.831740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:18.755 [2024-11-27 10:03:33.831902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:18.755 [2024-11-27 10:03:33.832064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:18.755 [2024-11-27 10:03:33.832064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:31:19.328 10:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:19.328 10:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:31:19.328 10:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:19.328 10:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:19.328 10:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:19.328 10:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:19.328 10:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:19.328 10:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.328 10:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:19.328 Malloc0 00:31:19.328 10:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.328 10:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:19.328 10:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.328 10:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:19.328 [2024-11-27 10:03:34.585646] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:19.328 10:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.328 10:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:19.328 10:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.328 10:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:19.328 10:03:34 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.328 10:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:19.328 10:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.328 10:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:19.328 10:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.328 10:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:19.328 10:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.328 10:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:19.328 [2024-11-27 10:03:34.626069] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:19.328 10:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.328 10:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:19.328 10:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.328 10:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:19.328 10:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.328 10:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=4072323 00:31:19.328 10:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:31:19.328 10:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:21.246 10:03:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 4072159 00:31:21.246 10:03:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:31:21.246 Read completed with error (sct=0, sc=8) 00:31:21.246 starting I/O failed 00:31:21.246 Read completed with error (sct=0, sc=8) 00:31:21.246 starting I/O failed 00:31:21.246 Read completed with error (sct=0, sc=8) 00:31:21.246 starting I/O failed 00:31:21.246 Read completed with error (sct=0, sc=8) 00:31:21.246 starting I/O failed 00:31:21.246 Read completed with error (sct=0, sc=8) 00:31:21.246 starting I/O failed 00:31:21.246 Read completed with error (sct=0, sc=8) 00:31:21.246 starting I/O failed 00:31:21.246 Read completed with error 
(sct=0, sc=8) 00:31:21.246 starting I/O failed 00:31:21.246 Write completed with error (sct=0, sc=8) 00:31:21.246 starting I/O failed 00:31:21.246 Write completed with error (sct=0, sc=8) 00:31:21.246 starting I/O failed 00:31:21.246 Write completed with error (sct=0, sc=8) 00:31:21.246 starting I/O failed 00:31:21.246 Write completed with error (sct=0, sc=8) 00:31:21.246 starting I/O failed 00:31:21.246 Write completed with error (sct=0, sc=8) 00:31:21.246 starting I/O failed 00:31:21.246 Read completed with error (sct=0, sc=8) 00:31:21.246 starting I/O failed 00:31:21.246 Write completed with error (sct=0, sc=8) 00:31:21.246 starting I/O failed 00:31:21.246 Read completed with error (sct=0, sc=8) 00:31:21.246 starting I/O failed 00:31:21.246 Read completed with error (sct=0, sc=8) 00:31:21.246 starting I/O failed 00:31:21.246 Read completed with error (sct=0, sc=8) 00:31:21.246 starting I/O failed 00:31:21.246 Read completed with error (sct=0, sc=8) 00:31:21.246 starting I/O failed 00:31:21.246 Write completed with error (sct=0, sc=8) 00:31:21.246 starting I/O failed 00:31:21.246 Read completed with error (sct=0, sc=8) 00:31:21.246 starting I/O failed 00:31:21.246 Read completed with error (sct=0, sc=8) 00:31:21.246 starting I/O failed 00:31:21.246 Write completed with error (sct=0, sc=8) 00:31:21.246 starting I/O failed 00:31:21.246 Read completed with error (sct=0, sc=8) 00:31:21.246 starting I/O failed 00:31:21.246 Read completed with error (sct=0, sc=8) 00:31:21.246 starting I/O failed 00:31:21.246 Read completed with error (sct=0, sc=8) 00:31:21.246 starting I/O failed 00:31:21.246 Write completed with error (sct=0, sc=8) 00:31:21.246 starting I/O failed 00:31:21.246 Read completed with error (sct=0, sc=8) 00:31:21.246 starting I/O failed 00:31:21.246 Write completed with error (sct=0, sc=8) 00:31:21.246 starting I/O failed 00:31:21.247 Write completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 Write completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 Write completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 Write completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 [2024-11-27 10:03:36.664681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:21.247 Read completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 Read completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 Read completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 Read completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 Read completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 Read completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 Read completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 Read completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 Read completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 Write completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 Read completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 Read completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 Read completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 Read 
completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 Write completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 Read completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 Read completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 Read completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 Write completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 Write completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 Read completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 Read completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 Read completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 Read completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 Read completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 Write completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 Write completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 Write completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 Read completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 Read completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 Write completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 Write completed with error (sct=0, sc=8) 00:31:21.247 starting I/O failed 00:31:21.247 [2024-11-27 10:03:36.665056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:21.247 [2024-11-27 10:03:36.665582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.247 [2024-11-27 10:03:36.665650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.247 qpair failed and we were unable to recover it. 00:31:21.247 [2024-11-27 10:03:36.666017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.247 [2024-11-27 10:03:36.666034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.247 qpair failed and we were unable to recover it. 00:31:21.247 [2024-11-27 10:03:36.666481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.247 [2024-11-27 10:03:36.666541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.247 qpair failed and we were unable to recover it. 00:31:21.247 [2024-11-27 10:03:36.666902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.247 [2024-11-27 10:03:36.666919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.247 qpair failed and we were unable to recover it. 
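The two 32-entry bursts of "Read/Write completed with error (sct=0, sc=8) ... starting I/O failed" above are the intended effect of the test: two seconds into the reconnect run, target_disconnect.sh killed the target (kill -9 4072159), so every command outstanding on the example's I/O qpairs was failed back (sct=0/sc=8 corresponds to NVMe generic status "Command Aborted due to SQ Deletion"), followed by a CQ transport error -6 (No such device or address) per qpair (ids 4 and 2 in this excerpt). What follows is the host side trying to reconnect. Below is a minimal sketch of the flow, reconstructed only from the commands visible earlier in this log; the rpc() wrapper standing in for the harness's rpc_cmd, the direct use of scripts/rpc.py, and the sleep in place of waitforlisten are assumptions, and cvl_0_0/cvl_0_1 are simply this rig's E810 port names.

  #!/usr/bin/env bash
  # Sketch of the tc2 setup recorded above (assumptions noted in the lead-in).
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Target interface lives in its own network namespace; the initiator side
  # (cvl_0_1) stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Start the target inside the namespace and expose a 64 MiB Malloc bdev
  # over NVMe/TCP (the log's nvmfappstart -m 0xF0 puts reactors on cores 4-7).
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  sleep 2  # stand-in for the harness's waitforlisten
  rpc() { "$SPDK/scripts/rpc.py" "$@"; }  # assumption: rpc_cmd ~ scripts/rpc.py
  rpc bdev_malloc_create 64 512 -b Malloc0
  rpc nvmf_create_transport -t tcp -o
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # Launch the reconnect example, then yank the target out from under it,
  # exactly as host/target_disconnect.sh does above.
  "$SPDK/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  sleep 2
  kill -9 "$nvmfpid"

With the target gone two seconds into a ten-second run, the example spends the remaining time failing its outstanding I/O and retrying the connection, which is what the rest of this section records.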
00:31:21.247 [2024-11-27 10:03:36.667151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.247 [2024-11-27 10:03:36.667179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.247 qpair failed and we were unable to recover it. 00:31:21.247 [2024-11-27 10:03:36.667465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.247 [2024-11-27 10:03:36.667480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.247 qpair failed and we were unable to recover it. 00:31:21.247 [2024-11-27 10:03:36.667802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.247 [2024-11-27 10:03:36.667817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.247 qpair failed and we were unable to recover it. 00:31:21.247 [2024-11-27 10:03:36.668144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.247 [2024-11-27 10:03:36.668164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.247 qpair failed and we were unable to recover it. 00:31:21.247 [2024-11-27 10:03:36.668439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.247 [2024-11-27 10:03:36.668455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.247 qpair failed and we were unable to recover it. 00:31:21.247 [2024-11-27 10:03:36.668758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.247 [2024-11-27 10:03:36.668773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.247 qpair failed and we were unable to recover it. 00:31:21.247 [2024-11-27 10:03:36.669118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.247 [2024-11-27 10:03:36.669132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.247 qpair failed and we were unable to recover it. 00:31:21.247 [2024-11-27 10:03:36.669500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.247 [2024-11-27 10:03:36.669514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.247 qpair failed and we were unable to recover it. 00:31:21.247 [2024-11-27 10:03:36.669878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.247 [2024-11-27 10:03:36.669892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.247 qpair failed and we were unable to recover it. 00:31:21.247 [2024-11-27 10:03:36.670249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.247 [2024-11-27 10:03:36.670265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.247 qpair failed and we were unable to recover it. 
00:31:21.247 [2024-11-27 10:03:36.670618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.247 [2024-11-27 10:03:36.670636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.247 qpair failed and we were unable to recover it. 00:31:21.247 [2024-11-27 10:03:36.670743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.247 [2024-11-27 10:03:36.670757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.247 qpair failed and we were unable to recover it. 00:31:21.247 [2024-11-27 10:03:36.671087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.247 [2024-11-27 10:03:36.671101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.247 qpair failed and we were unable to recover it. 00:31:21.247 [2024-11-27 10:03:36.671408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.247 [2024-11-27 10:03:36.671424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.247 qpair failed and we were unable to recover it. 00:31:21.247 [2024-11-27 10:03:36.671783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.247 [2024-11-27 10:03:36.671799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.247 qpair failed and we were unable to recover it. 00:31:21.247 [2024-11-27 10:03:36.672121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.247 [2024-11-27 10:03:36.672137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.247 qpair failed and we were unable to recover it. 00:31:21.247 [2024-11-27 10:03:36.672507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.247 [2024-11-27 10:03:36.672523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.247 qpair failed and we were unable to recover it. 00:31:21.247 [2024-11-27 10:03:36.672878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.247 [2024-11-27 10:03:36.672894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.247 qpair failed and we were unable to recover it. 00:31:21.247 [2024-11-27 10:03:36.673188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.247 [2024-11-27 10:03:36.673203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.247 qpair failed and we were unable to recover it. 00:31:21.247 [2024-11-27 10:03:36.673607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.247 [2024-11-27 10:03:36.673620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.247 qpair failed and we were unable to recover it. 
00:31:21.247 [2024-11-27 10:03:36.673833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.247 [2024-11-27 10:03:36.673848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.247 qpair failed and we were unable to recover it. 00:31:21.248 [2024-11-27 10:03:36.674149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.674170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.248 [2024-11-27 10:03:36.674538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.674552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.248 [2024-11-27 10:03:36.674896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.674912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.248 [2024-11-27 10:03:36.675140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.675156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.248 [2024-11-27 10:03:36.675465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.675479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.248 [2024-11-27 10:03:36.675833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.675847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.248 [2024-11-27 10:03:36.676196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.676210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.248 [2024-11-27 10:03:36.676535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.676558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.248 [2024-11-27 10:03:36.676874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.676889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 
00:31:21.248 [2024-11-27 10:03:36.677077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.677094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.248 [2024-11-27 10:03:36.677503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.677517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.248 [2024-11-27 10:03:36.677817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.677832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.248 [2024-11-27 10:03:36.678060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.678075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.248 [2024-11-27 10:03:36.678408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.678422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.248 [2024-11-27 10:03:36.678741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.678755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.248 [2024-11-27 10:03:36.679087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.679102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.248 [2024-11-27 10:03:36.679391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.679406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.248 [2024-11-27 10:03:36.679750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.679765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.248 [2024-11-27 10:03:36.680083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.680096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 
00:31:21.248 [2024-11-27 10:03:36.680436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.680451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.248 [2024-11-27 10:03:36.680789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.680804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.248 [2024-11-27 10:03:36.681175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.681191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.248 [2024-11-27 10:03:36.681558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.681570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.248 [2024-11-27 10:03:36.681868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.681880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.248 [2024-11-27 10:03:36.682143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.682155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.248 [2024-11-27 10:03:36.682512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.682526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.248 [2024-11-27 10:03:36.682864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.682877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.248 [2024-11-27 10:03:36.683218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.683232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.248 [2024-11-27 10:03:36.683549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.683563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 
00:31:21.248 [2024-11-27 10:03:36.683875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.683891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.248 [2024-11-27 10:03:36.684197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.684212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.248 [2024-11-27 10:03:36.684418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.684431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.248 [2024-11-27 10:03:36.684720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.684735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.248 [2024-11-27 10:03:36.685036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.685050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.248 [2024-11-27 10:03:36.685354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.685368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.248 [2024-11-27 10:03:36.685685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.685699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.248 [2024-11-27 10:03:36.686013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.686028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.248 [2024-11-27 10:03:36.686383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.248 [2024-11-27 10:03:36.686398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.248 qpair failed and we were unable to recover it. 00:31:21.249 [2024-11-27 10:03:36.686709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.249 [2024-11-27 10:03:36.686723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.249 qpair failed and we were unable to recover it. 
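From here to the end of the section the same three-record pattern repeats: posix_sock_create() fails with errno = 111, nvme_tcp_qpair_connect_sock() reports the socket error for the same tqpair (0x7fa220000b90) at 10.0.0.2:4420, and the reconnect example prints its per-attempt verdict "qpair failed and we were unable to recover it." Nothing new happens between attempts; only the timestamps advance. A couple of hypothetical triage one-liners over a saved copy of this console output (the console.log filename is an assumption):

  grep -c 'errno = 111' console.log                          # count failed connect() attempts
  grep -o 'tqpair=0x[0-9a-f]*' console.log | sort | uniq -c  # attempts grouped by qpair address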
00:31:21.249 [2024-11-27 10:03:36.687037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.249 [2024-11-27 10:03:36.687050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.249 qpair failed and we were unable to recover it. 00:31:21.249 [2024-11-27 10:03:36.687415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.249 [2024-11-27 10:03:36.687432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.249 qpair failed and we were unable to recover it. 00:31:21.249 [2024-11-27 10:03:36.687749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.249 [2024-11-27 10:03:36.687763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.249 qpair failed and we were unable to recover it. 00:31:21.249 [2024-11-27 10:03:36.688071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.249 [2024-11-27 10:03:36.688086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.249 qpair failed and we were unable to recover it. 00:31:21.249 [2024-11-27 10:03:36.688389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.249 [2024-11-27 10:03:36.688403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.249 qpair failed and we were unable to recover it. 00:31:21.249 [2024-11-27 10:03:36.688797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.249 [2024-11-27 10:03:36.688812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.249 qpair failed and we were unable to recover it. 00:31:21.249 [2024-11-27 10:03:36.689144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.249 [2024-11-27 10:03:36.689172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.249 qpair failed and we were unable to recover it. 00:31:21.249 [2024-11-27 10:03:36.689506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.249 [2024-11-27 10:03:36.689521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.249 qpair failed and we were unable to recover it. 00:31:21.249 [2024-11-27 10:03:36.689832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.249 [2024-11-27 10:03:36.689849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.249 qpair failed and we were unable to recover it. 00:31:21.249 [2024-11-27 10:03:36.690170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.249 [2024-11-27 10:03:36.690186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.249 qpair failed and we were unable to recover it. 
00:31:21.249 [2024-11-27 10:03:36.690545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.249 [2024-11-27 10:03:36.690561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.249 qpair failed and we were unable to recover it. 00:31:21.249 [2024-11-27 10:03:36.690876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.249 [2024-11-27 10:03:36.690890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.249 qpair failed and we were unable to recover it. 00:31:21.249 [2024-11-27 10:03:36.691243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.249 [2024-11-27 10:03:36.691256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.249 qpair failed and we were unable to recover it. 00:31:21.249 [2024-11-27 10:03:36.691620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.249 [2024-11-27 10:03:36.691633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.249 qpair failed and we were unable to recover it. 00:31:21.249 [2024-11-27 10:03:36.691947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.249 [2024-11-27 10:03:36.691960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.249 qpair failed and we were unable to recover it. 00:31:21.249 [2024-11-27 10:03:36.692309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.249 [2024-11-27 10:03:36.692323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.249 qpair failed and we were unable to recover it. 00:31:21.249 [2024-11-27 10:03:36.692639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.249 [2024-11-27 10:03:36.692656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.249 qpair failed and we were unable to recover it. 00:31:21.249 [2024-11-27 10:03:36.693006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.249 [2024-11-27 10:03:36.693021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.249 qpair failed and we were unable to recover it. 00:31:21.249 [2024-11-27 10:03:36.693345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.249 [2024-11-27 10:03:36.693359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.249 qpair failed and we were unable to recover it. 00:31:21.249 [2024-11-27 10:03:36.693689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.249 [2024-11-27 10:03:36.693703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.249 qpair failed and we were unable to recover it. 
00:31:21.249 [2024-11-27 10:03:36.694012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.249 [2024-11-27 10:03:36.694025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.249 qpair failed and we were unable to recover it. 00:31:21.249 [2024-11-27 10:03:36.694357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.249 [2024-11-27 10:03:36.694371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.249 qpair failed and we were unable to recover it. 00:31:21.249 [2024-11-27 10:03:36.694674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.249 [2024-11-27 10:03:36.694690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.249 qpair failed and we were unable to recover it. 00:31:21.249 [2024-11-27 10:03:36.695029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.249 [2024-11-27 10:03:36.695048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.249 qpair failed and we were unable to recover it. 00:31:21.249 [2024-11-27 10:03:36.695295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.249 [2024-11-27 10:03:36.695311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.249 qpair failed and we were unable to recover it. 00:31:21.249 [2024-11-27 10:03:36.695653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.249 [2024-11-27 10:03:36.695670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.249 qpair failed and we were unable to recover it. 00:31:21.249 [2024-11-27 10:03:36.695993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.249 [2024-11-27 10:03:36.696010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.249 qpair failed and we were unable to recover it. 00:31:21.249 [2024-11-27 10:03:36.696204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.249 [2024-11-27 10:03:36.696222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.249 qpair failed and we were unable to recover it. 00:31:21.249 [2024-11-27 10:03:36.696564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.249 [2024-11-27 10:03:36.696581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.249 qpair failed and we were unable to recover it. 00:31:21.249 [2024-11-27 10:03:36.696904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.249 [2024-11-27 10:03:36.696924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.249 qpair failed and we were unable to recover it. 
00:31:21.249 [2024-11-27 10:03:36.697226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.249 [2024-11-27 10:03:36.697244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.249 qpair failed and we were unable to recover it.
00:31:21.249 [2024-11-27 10:03:36.697592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.249 [2024-11-27 10:03:36.697609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.249 qpair failed and we were unable to recover it.
00:31:21.249 [2024-11-27 10:03:36.697930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.249 [2024-11-27 10:03:36.697947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.249 qpair failed and we were unable to recover it.
00:31:21.249 [2024-11-27 10:03:36.698240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.249 [2024-11-27 10:03:36.698258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.249 qpair failed and we were unable to recover it.
00:31:21.249 [2024-11-27 10:03:36.698595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.249 [2024-11-27 10:03:36.698612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.249 qpair failed and we were unable to recover it.
00:31:21.249 [2024-11-27 10:03:36.698925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.249 [2024-11-27 10:03:36.698946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.249 qpair failed and we were unable to recover it.
00:31:21.249 [2024-11-27 10:03:36.699190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.249 [2024-11-27 10:03:36.699208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.249 qpair failed and we were unable to recover it.
00:31:21.250 [2024-11-27 10:03:36.699538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.250 [2024-11-27 10:03:36.699556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.250 qpair failed and we were unable to recover it.
00:31:21.250 [2024-11-27 10:03:36.699881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.250 [2024-11-27 10:03:36.699898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.250 qpair failed and we were unable to recover it.
00:31:21.250 [2024-11-27 10:03:36.700222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.250 [2024-11-27 10:03:36.700239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.250 qpair failed and we were unable to recover it.
00:31:21.250 [2024-11-27 10:03:36.700597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.250 [2024-11-27 10:03:36.700614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.250 qpair failed and we were unable to recover it.
00:31:21.250 [2024-11-27 10:03:36.700914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.250 [2024-11-27 10:03:36.700930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.250 qpair failed and we were unable to recover it.
00:31:21.250 [2024-11-27 10:03:36.701234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.250 [2024-11-27 10:03:36.701252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.250 qpair failed and we were unable to recover it.
00:31:21.250 [2024-11-27 10:03:36.701613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.250 [2024-11-27 10:03:36.701632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.250 qpair failed and we were unable to recover it.
00:31:21.250 [2024-11-27 10:03:36.701958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.250 [2024-11-27 10:03:36.701976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.250 qpair failed and we were unable to recover it.
00:31:21.250 [2024-11-27 10:03:36.702303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.250 [2024-11-27 10:03:36.702321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.250 qpair failed and we were unable to recover it.
00:31:21.250 [2024-11-27 10:03:36.702666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.250 [2024-11-27 10:03:36.702685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.250 qpair failed and we were unable to recover it.
00:31:21.250 [2024-11-27 10:03:36.702998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.250 [2024-11-27 10:03:36.703016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.250 qpair failed and we were unable to recover it.
00:31:21.250 [2024-11-27 10:03:36.703352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.250 [2024-11-27 10:03:36.703374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.250 qpair failed and we were unable to recover it.
00:31:21.250 [2024-11-27 10:03:36.703681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.250 [2024-11-27 10:03:36.703698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.250 qpair failed and we were unable to recover it.
00:31:21.250 [2024-11-27 10:03:36.704021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.250 [2024-11-27 10:03:36.704040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.250 qpair failed and we were unable to recover it.
00:31:21.250 [2024-11-27 10:03:36.704427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.250 [2024-11-27 10:03:36.704451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.250 qpair failed and we were unable to recover it.
00:31:21.250 [2024-11-27 10:03:36.704762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.250 [2024-11-27 10:03:36.704778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.250 qpair failed and we were unable to recover it.
00:31:21.250 [2024-11-27 10:03:36.705105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.250 [2024-11-27 10:03:36.705126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.250 qpair failed and we were unable to recover it.
00:31:21.250 [2024-11-27 10:03:36.705447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.250 [2024-11-27 10:03:36.705468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.250 qpair failed and we were unable to recover it.
00:31:21.250 [2024-11-27 10:03:36.705833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.250 [2024-11-27 10:03:36.705855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.250 qpair failed and we were unable to recover it.
00:31:21.250 [2024-11-27 10:03:36.706237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.250 [2024-11-27 10:03:36.706261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.250 qpair failed and we were unable to recover it.
00:31:21.250 [2024-11-27 10:03:36.706605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.250 [2024-11-27 10:03:36.706627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.250 qpair failed and we were unable to recover it.
00:31:21.250 [2024-11-27 10:03:36.706965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.250 [2024-11-27 10:03:36.706988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.250 qpair failed and we were unable to recover it.
00:31:21.250 [2024-11-27 10:03:36.707253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.250 [2024-11-27 10:03:36.707275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.250 qpair failed and we were unable to recover it.
00:31:21.250 [2024-11-27 10:03:36.707645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.250 [2024-11-27 10:03:36.707667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.250 qpair failed and we were unable to recover it.
00:31:21.250 [2024-11-27 10:03:36.708024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.250 [2024-11-27 10:03:36.708048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.250 qpair failed and we were unable to recover it.
00:31:21.250 [2024-11-27 10:03:36.708357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.250 [2024-11-27 10:03:36.708381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.250 qpair failed and we were unable to recover it.
00:31:21.250 [2024-11-27 10:03:36.708713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.250 [2024-11-27 10:03:36.708735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.250 qpair failed and we were unable to recover it.
00:31:21.250 [2024-11-27 10:03:36.709100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.250 [2024-11-27 10:03:36.709122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.250 qpair failed and we were unable to recover it.
00:31:21.250 [2024-11-27 10:03:36.709472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.250 [2024-11-27 10:03:36.709493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.250 qpair failed and we were unable to recover it.
00:31:21.250 [2024-11-27 10:03:36.709833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.250 [2024-11-27 10:03:36.709854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.530 qpair failed and we were unable to recover it.
00:31:21.530 [2024-11-27 10:03:36.710204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.530 [2024-11-27 10:03:36.710230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.530 qpair failed and we were unable to recover it.
00:31:21.530 [2024-11-27 10:03:36.710539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.530 [2024-11-27 10:03:36.710563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.530 qpair failed and we were unable to recover it.
00:31:21.530 [2024-11-27 10:03:36.710906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.530 [2024-11-27 10:03:36.710927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.530 qpair failed and we were unable to recover it.
00:31:21.530 [2024-11-27 10:03:36.711198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.530 [2024-11-27 10:03:36.711220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.530 qpair failed and we were unable to recover it.
00:31:21.530 [2024-11-27 10:03:36.711568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.530 [2024-11-27 10:03:36.711591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.530 qpair failed and we were unable to recover it.
00:31:21.530 [2024-11-27 10:03:36.711961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.530 [2024-11-27 10:03:36.711982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.530 qpair failed and we were unable to recover it.
00:31:21.530 [2024-11-27 10:03:36.712306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.530 [2024-11-27 10:03:36.712329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.530 qpair failed and we were unable to recover it.
00:31:21.530 [2024-11-27 10:03:36.712663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.712684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.713012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.713037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.713273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.713298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.713644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.713667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.714005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.714026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.714463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.714486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.714831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.714853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.715190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.715214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.715591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.715615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.715961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.715982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.716316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.716340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.716682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.716704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.717039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.717062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.717396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.717419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.717781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.717817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.718183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.718212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.718528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.718555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.718918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.718947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.719311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.719338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.719742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.719771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.720128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.720171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.720534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.720562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.720923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.720951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.721316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.721346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.721713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.721741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.722110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.722137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.722484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.722512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.722889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.722916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.723279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.723308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.723667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.723695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.724058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.724087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.724438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.724465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.724826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.724854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.725218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.725248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.725598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.725626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.725991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.726022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.726389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.726418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.726774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.531 [2024-11-27 10:03:36.726803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.531 qpair failed and we were unable to recover it.
00:31:21.531 [2024-11-27 10:03:36.727172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.727201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.727552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.727580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.727913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.727940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.728299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.728329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.728692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.728721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.729079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.729108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.729486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.729516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.729867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.729897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.730247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.730281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.730645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.730677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.731039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.731071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.731409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.731442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.731801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.731831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.732195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.732227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.732585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.732617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.732974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.733005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.733347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.733388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.733736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.733767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.734136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.734193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.734587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.734618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.735045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.735076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.735434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.735467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.735834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.735864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.736232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.736264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.736625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.736654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.736886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.736920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.737271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.737304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.737663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.737694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.738053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.738085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.738450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.738484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.738846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.738877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.739237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.739271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.739654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.739686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.740049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.740080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.740443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.740476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.740835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.740866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.741223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.741257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.741678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.532 [2024-11-27 10:03:36.741709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.532 qpair failed and we were unable to recover it.
00:31:21.532 [2024-11-27 10:03:36.741943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.741978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.742336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.742369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.742616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.742649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.742997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.743027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.743390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.743423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.743779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.743812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.744177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.744210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.744569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.744601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.744953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.744985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.745348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.745380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.745756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.745787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.746149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.746203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.746581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.746615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.746976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.747008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.747347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.747379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.747737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.747770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.748126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.748156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.748526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.748557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.748917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.748965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.749316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.749348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.749720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.749750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.750073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.750104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.750473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.750506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.750864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.750898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.751246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.751278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.751648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.751679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.752048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.752079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.752442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.752474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.752805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.752836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.753197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.753231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.753591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.753621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.753979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.754010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.754351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.754383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.754752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.754783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.755145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.755193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.755568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.755598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.756034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.756067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.756433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.533 [2024-11-27 10:03:36.756467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.533 qpair failed and we were unable to recover it.
00:31:21.533 [2024-11-27 10:03:36.756824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.756853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.757206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.757237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.757644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.757676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.758027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.758059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.758412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.758444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.758795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.758826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.759192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.759224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.759586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.759625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.759974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.760008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.760345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.760377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.760745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.760776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.761134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.761178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.761548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.761579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.761936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.761968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.762323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.762355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.762726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.762757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.763119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.763150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.763514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.763547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.763897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.763927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.764289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.764322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.764696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.764730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.765118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.765150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.765545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.765576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.765936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.765967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.766280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.766312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.766701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.766731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.767100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.767133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.767547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.767579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.767929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.767961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.768316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.768349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.768709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.768741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.769106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.769139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.769550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.769581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.769933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.769964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.770325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.770357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.770758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.770789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.771147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.771193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.534 [2024-11-27 10:03:36.771467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.534 [2024-11-27 10:03:36.771501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.534 qpair failed and we were unable to recover it.
00:31:21.535 [2024-11-27 10:03:36.771857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.535 [2024-11-27 10:03:36.771888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.535 qpair failed and we were unable to recover it.
00:31:21.535 [2024-11-27 10:03:36.772247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.535 [2024-11-27 10:03:36.772281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.535 qpair failed and we were unable to recover it.
00:31:21.535 [2024-11-27 10:03:36.772634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.535 [2024-11-27 10:03:36.772666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.535 qpair failed and we were unable to recover it.
00:31:21.535 [2024-11-27 10:03:36.773065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.535 [2024-11-27 10:03:36.773096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.535 qpair failed and we were unable to recover it.
00:31:21.535 [2024-11-27 10:03:36.773448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.535 [2024-11-27 10:03:36.773483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.535 qpair failed and we were unable to recover it.
00:31:21.535 [2024-11-27 10:03:36.773839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.535 [2024-11-27 10:03:36.773870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.535 qpair failed and we were unable to recover it.
00:31:21.535 [2024-11-27 10:03:36.774238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.535 [2024-11-27 10:03:36.774271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.535 qpair failed and we were unable to recover it.
00:31:21.535 [2024-11-27 10:03:36.774645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.535 [2024-11-27 10:03:36.774676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.535 qpair failed and we were unable to recover it.
00:31:21.535 [2024-11-27 10:03:36.775026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.535 [2024-11-27 10:03:36.775058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.535 qpair failed and we were unable to recover it.
00:31:21.535 [2024-11-27 10:03:36.775423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.535 [2024-11-27 10:03:36.775461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.535 qpair failed and we were unable to recover it.
00:31:21.535 [2024-11-27 10:03:36.775856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.535 [2024-11-27 10:03:36.775890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.535 qpair failed and we were unable to recover it.
00:31:21.535 [2024-11-27 10:03:36.776229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.535 [2024-11-27 10:03:36.776261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.535 qpair failed and we were unable to recover it.
00:31:21.535 [2024-11-27 10:03:36.776631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.535 [2024-11-27 10:03:36.776662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.535 qpair failed and we were unable to recover it.
00:31:21.535 [2024-11-27 10:03:36.777019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.535 [2024-11-27 10:03:36.777049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:21.535 qpair failed and we were unable to recover it.
00:31:21.535 [2024-11-27 10:03:36.777421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.535 [2024-11-27 10:03:36.777454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.535 qpair failed and we were unable to recover it. 00:31:21.535 [2024-11-27 10:03:36.777806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.535 [2024-11-27 10:03:36.777839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.535 qpair failed and we were unable to recover it. 00:31:21.535 [2024-11-27 10:03:36.778197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.535 [2024-11-27 10:03:36.778230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.535 qpair failed and we were unable to recover it. 00:31:21.535 [2024-11-27 10:03:36.778624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.535 [2024-11-27 10:03:36.778656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.535 qpair failed and we were unable to recover it. 00:31:21.535 [2024-11-27 10:03:36.779013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.535 [2024-11-27 10:03:36.779044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.535 qpair failed and we were unable to recover it. 00:31:21.535 [2024-11-27 10:03:36.779405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.535 [2024-11-27 10:03:36.779437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.535 qpair failed and we were unable to recover it. 00:31:21.535 [2024-11-27 10:03:36.779762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.535 [2024-11-27 10:03:36.779795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.535 qpair failed and we were unable to recover it. 00:31:21.535 [2024-11-27 10:03:36.780136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.535 [2024-11-27 10:03:36.780216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.535 qpair failed and we were unable to recover it. 00:31:21.535 [2024-11-27 10:03:36.780603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.535 [2024-11-27 10:03:36.780634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.535 qpair failed and we were unable to recover it. 00:31:21.535 [2024-11-27 10:03:36.780994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.535 [2024-11-27 10:03:36.781025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.535 qpair failed and we were unable to recover it. 
00:31:21.535 [2024-11-27 10:03:36.781389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.535 [2024-11-27 10:03:36.781420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.535 qpair failed and we were unable to recover it. 00:31:21.535 [2024-11-27 10:03:36.781780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.535 [2024-11-27 10:03:36.781809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.535 qpair failed and we were unable to recover it. 00:31:21.535 [2024-11-27 10:03:36.782179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.535 [2024-11-27 10:03:36.782212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.535 qpair failed and we were unable to recover it. 00:31:21.535 [2024-11-27 10:03:36.782565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.535 [2024-11-27 10:03:36.782597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.535 qpair failed and we were unable to recover it. 00:31:21.535 [2024-11-27 10:03:36.782953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.535 [2024-11-27 10:03:36.782985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.535 qpair failed and we were unable to recover it. 00:31:21.535 [2024-11-27 10:03:36.783341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.535 [2024-11-27 10:03:36.783374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.535 qpair failed and we were unable to recover it. 00:31:21.535 [2024-11-27 10:03:36.783713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.535 [2024-11-27 10:03:36.783746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.535 qpair failed and we were unable to recover it. 00:31:21.535 [2024-11-27 10:03:36.784103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.535 [2024-11-27 10:03:36.784133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.535 qpair failed and we were unable to recover it. 00:31:21.536 [2024-11-27 10:03:36.784543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.784576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 00:31:21.536 [2024-11-27 10:03:36.784947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.784979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 
00:31:21.536 [2024-11-27 10:03:36.785381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.785414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 00:31:21.536 [2024-11-27 10:03:36.785777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.785810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 00:31:21.536 [2024-11-27 10:03:36.786198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.786230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 00:31:21.536 [2024-11-27 10:03:36.786589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.786621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 00:31:21.536 [2024-11-27 10:03:36.786972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.787005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 00:31:21.536 [2024-11-27 10:03:36.787388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.787420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 00:31:21.536 [2024-11-27 10:03:36.787769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.787800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 00:31:21.536 [2024-11-27 10:03:36.788174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.788208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 00:31:21.536 [2024-11-27 10:03:36.788565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.788594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 00:31:21.536 [2024-11-27 10:03:36.788952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.788984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 
00:31:21.536 [2024-11-27 10:03:36.789340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.789374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 00:31:21.536 [2024-11-27 10:03:36.789733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.789764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 00:31:21.536 [2024-11-27 10:03:36.790127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.790169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 00:31:21.536 [2024-11-27 10:03:36.790555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.790586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 00:31:21.536 [2024-11-27 10:03:36.790956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.790987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 00:31:21.536 [2024-11-27 10:03:36.791345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.791383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 00:31:21.536 [2024-11-27 10:03:36.791741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.791773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 00:31:21.536 [2024-11-27 10:03:36.792123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.792154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 00:31:21.536 [2024-11-27 10:03:36.792566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.792597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 00:31:21.536 [2024-11-27 10:03:36.792951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.792982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 
00:31:21.536 [2024-11-27 10:03:36.793345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.793378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 00:31:21.536 [2024-11-27 10:03:36.793739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.793772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 00:31:21.536 [2024-11-27 10:03:36.794132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.794173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 00:31:21.536 [2024-11-27 10:03:36.794568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.794600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 00:31:21.536 [2024-11-27 10:03:36.794890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.794921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 00:31:21.536 [2024-11-27 10:03:36.795281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.795312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 00:31:21.536 [2024-11-27 10:03:36.795665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.795699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 00:31:21.536 [2024-11-27 10:03:36.796057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.796089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 00:31:21.536 [2024-11-27 10:03:36.796444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.796477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 00:31:21.536 [2024-11-27 10:03:36.796832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.796863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 
00:31:21.536 [2024-11-27 10:03:36.797131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.797183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 00:31:21.536 [2024-11-27 10:03:36.797543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.797573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 00:31:21.536 [2024-11-27 10:03:36.797745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.797778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 00:31:21.536 [2024-11-27 10:03:36.798130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.798177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 00:31:21.536 [2024-11-27 10:03:36.798545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.798576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 00:31:21.536 [2024-11-27 10:03:36.798931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.536 [2024-11-27 10:03:36.798962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.536 qpair failed and we were unable to recover it. 00:31:21.537 [2024-11-27 10:03:36.799314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.799347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 00:31:21.537 [2024-11-27 10:03:36.799705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.799736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 00:31:21.537 [2024-11-27 10:03:36.800093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.800126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 00:31:21.537 [2024-11-27 10:03:36.800484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.800517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 
00:31:21.537 [2024-11-27 10:03:36.800843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.800875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 00:31:21.537 [2024-11-27 10:03:36.801231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.801264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 00:31:21.537 [2024-11-27 10:03:36.801668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.801699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 00:31:21.537 [2024-11-27 10:03:36.802056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.802086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 00:31:21.537 [2024-11-27 10:03:36.802447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.802479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 00:31:21.537 [2024-11-27 10:03:36.802721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.802757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 00:31:21.537 [2024-11-27 10:03:36.803111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.803142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 00:31:21.537 [2024-11-27 10:03:36.803510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.803542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 00:31:21.537 [2024-11-27 10:03:36.803886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.803917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 00:31:21.537 [2024-11-27 10:03:36.804280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.804313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 
00:31:21.537 [2024-11-27 10:03:36.804666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.804698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 00:31:21.537 [2024-11-27 10:03:36.805060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.805092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 00:31:21.537 [2024-11-27 10:03:36.805468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.805502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 00:31:21.537 [2024-11-27 10:03:36.805896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.805927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 00:31:21.537 [2024-11-27 10:03:36.806288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.806321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 00:31:21.537 [2024-11-27 10:03:36.806660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.806697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 00:31:21.537 [2024-11-27 10:03:36.807096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.807129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 00:31:21.537 [2024-11-27 10:03:36.807509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.807542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 00:31:21.537 [2024-11-27 10:03:36.807896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.807928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 00:31:21.537 [2024-11-27 10:03:36.808283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.808316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 
00:31:21.537 [2024-11-27 10:03:36.808675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.808705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 00:31:21.537 [2024-11-27 10:03:36.809063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.809095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 00:31:21.537 [2024-11-27 10:03:36.809457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.809489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 00:31:21.537 [2024-11-27 10:03:36.809839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.809871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 00:31:21.537 [2024-11-27 10:03:36.810235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.810270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 00:31:21.537 [2024-11-27 10:03:36.810650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.810682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 00:31:21.537 [2024-11-27 10:03:36.811038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.811070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 00:31:21.537 [2024-11-27 10:03:36.811302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.811335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 00:31:21.537 [2024-11-27 10:03:36.811705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.811738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 00:31:21.537 [2024-11-27 10:03:36.812095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.812129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 
00:31:21.537 [2024-11-27 10:03:36.812523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.812557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 00:31:21.537 [2024-11-27 10:03:36.812792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.812827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 00:31:21.537 [2024-11-27 10:03:36.813200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.813232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.537 qpair failed and we were unable to recover it. 00:31:21.537 [2024-11-27 10:03:36.813627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.537 [2024-11-27 10:03:36.813659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 00:31:21.538 [2024-11-27 10:03:36.814013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.814046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 00:31:21.538 [2024-11-27 10:03:36.814415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.814448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 00:31:21.538 [2024-11-27 10:03:36.814809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.814842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 00:31:21.538 [2024-11-27 10:03:36.815079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.815113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 00:31:21.538 [2024-11-27 10:03:36.815550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.815582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 00:31:21.538 [2024-11-27 10:03:36.815904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.815939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 
00:31:21.538 [2024-11-27 10:03:36.816294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.816328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 00:31:21.538 [2024-11-27 10:03:36.816674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.816706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 00:31:21.538 [2024-11-27 10:03:36.817061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.817094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 00:31:21.538 [2024-11-27 10:03:36.817426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.817457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 00:31:21.538 [2024-11-27 10:03:36.817815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.817847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 00:31:21.538 [2024-11-27 10:03:36.818203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.818234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 00:31:21.538 [2024-11-27 10:03:36.818626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.818659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 00:31:21.538 [2024-11-27 10:03:36.818991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.819023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 00:31:21.538 [2024-11-27 10:03:36.819389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.819421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 00:31:21.538 [2024-11-27 10:03:36.819800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.819832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 
00:31:21.538 [2024-11-27 10:03:36.820189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.820222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 00:31:21.538 [2024-11-27 10:03:36.820583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.820615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 00:31:21.538 [2024-11-27 10:03:36.820982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.821015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 00:31:21.538 [2024-11-27 10:03:36.821390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.821424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 00:31:21.538 [2024-11-27 10:03:36.821817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.821848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 00:31:21.538 [2024-11-27 10:03:36.822219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.822258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 00:31:21.538 [2024-11-27 10:03:36.822617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.822649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 00:31:21.538 [2024-11-27 10:03:36.823012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.823044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 00:31:21.538 [2024-11-27 10:03:36.823412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.823443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 00:31:21.538 [2024-11-27 10:03:36.823799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.823830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 
00:31:21.538 [2024-11-27 10:03:36.824198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.824231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 00:31:21.538 [2024-11-27 10:03:36.824583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.824615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 00:31:21.538 [2024-11-27 10:03:36.824973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.825006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 00:31:21.538 [2024-11-27 10:03:36.825340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.825374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 00:31:21.538 [2024-11-27 10:03:36.825738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.825770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 00:31:21.538 [2024-11-27 10:03:36.826133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.826174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 00:31:21.538 [2024-11-27 10:03:36.826504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.826533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 00:31:21.538 [2024-11-27 10:03:36.826902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.826932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 00:31:21.538 [2024-11-27 10:03:36.827296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.827328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 00:31:21.538 [2024-11-27 10:03:36.827729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.827762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 
00:31:21.538 [2024-11-27 10:03:36.828111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.538 [2024-11-27 10:03:36.828142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.538 qpair failed and we were unable to recover it. 00:31:21.538 [2024-11-27 10:03:36.828564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.539 [2024-11-27 10:03:36.828595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.539 qpair failed and we were unable to recover it. 00:31:21.539 [2024-11-27 10:03:36.828943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.539 [2024-11-27 10:03:36.828976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.539 qpair failed and we were unable to recover it. 00:31:21.539 [2024-11-27 10:03:36.829338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.539 [2024-11-27 10:03:36.829369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.539 qpair failed and we were unable to recover it. 00:31:21.539 [2024-11-27 10:03:36.829741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.539 [2024-11-27 10:03:36.829774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.539 qpair failed and we were unable to recover it. 00:31:21.539 [2024-11-27 10:03:36.830131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.539 [2024-11-27 10:03:36.830174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.539 qpair failed and we were unable to recover it. 00:31:21.539 [2024-11-27 10:03:36.830553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.539 [2024-11-27 10:03:36.830584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.539 qpair failed and we were unable to recover it. 00:31:21.539 [2024-11-27 10:03:36.830938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.539 [2024-11-27 10:03:36.830969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.539 qpair failed and we were unable to recover it. 00:31:21.539 [2024-11-27 10:03:36.831319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.539 [2024-11-27 10:03:36.831351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.539 qpair failed and we were unable to recover it. 00:31:21.539 [2024-11-27 10:03:36.831714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.539 [2024-11-27 10:03:36.831745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:21.539 qpair failed and we were unable to recover it. 
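For anyone triaging this block: errno 111 on Linux is ECONNREFUSED, which here means the TCP connect to 10.0.0.2:4420 (the conventional NVMe/TCP port) was refused because nothing was accepting connections on that address at the time, so the initiator keeps retrying. A minimal sketch that reproduces the same errno the posix_sock_create path is reporting -- a hypothetical standalone diagnostic, not part of the SPDK test suite; the file name and output text are made up:

    /* probe_connect.c -- hypothetical diagnostic, not SPDK code.
     * Attempts one TCP connect() to the address/port from the log and
     * prints the resulting errno; with no listener on 10.0.0.2:4420 this
     * is 111 (Connection refused), matching the messages above. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in sa = {0};
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            /* No listener => errno = 111 (ECONNREFUSED) on Linux. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        } else {
            printf("connected: a listener is accepting on 10.0.0.2:4420\n");
        }
        close(fd);
        return 0;
    }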
00:31:21.542 [2024-11-27 10:03:36.875651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1150e00 is same with the state(6) to be set
00:31:21.542 Read completed with error (sct=0, sc=8)
00:31:21.542 starting I/O failed
(the two lines above repeat for all 32 I/Os outstanding on the qpair -- 21 reads and 11 writes -- each completing with sct=0, sc=8)
00:31:21.542 [2024-11-27 10:03:36.876585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:21.542 [2024-11-27 10:03:36.877065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.542 [2024-11-27 10:03:36.877128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420
00:31:21.542 qpair failed and we were unable to recover it.
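On the sct/sc pair above: status code type 0 is the NVMe generic command status set, and within that set status code 0x08 is "Command Aborted due to SQ Deletion" -- consistent with the qpair's socket dying, the submission queue being torn down, and every outstanding read/write completing in error. The -6 in the CQ transport error is -ENXIO ("No such device or address", as the log itself glosses). A small sketch of how those two numbers come out of the 15-bit CQE status field (bit layout per the NVMe base spec; the helper and file names are made up, this is not SPDK code):

    /* decode_status.c -- hypothetical helper, not SPDK code.
     * Splits an NVMe completion-queue-entry status field (the 15 bits
     * above the phase tag) into its parts: SC[7:0], SCT[10:8],
     * CRD[12:11], M[13], DNR[14]. */
    #include <stdint.h>
    #include <stdio.h>

    struct nvme_status {
        uint8_t sc;   /* status code */
        uint8_t sct;  /* status code type: 0 = generic command status */
        uint8_t crd;  /* command retry delay */
        uint8_t m;    /* more */
        uint8_t dnr;  /* do not retry */
    };

    static struct nvme_status decode_status(uint16_t sf)
    {
        struct nvme_status s;
        s.sc  = sf & 0xff;
        s.sct = (sf >> 8) & 0x7;
        s.crd = (sf >> 11) & 0x3;
        s.m   = (sf >> 13) & 0x1;
        s.dnr = (sf >> 14) & 0x1;
        return s;
    }

    int main(void)
    {
        /* The status seen in the log: sct=0 (generic status set), sc=0x08,
         * i.e. "Command Aborted due to SQ Deletion". */
        uint16_t sf = (0x0 << 8) | 0x08;
        struct nvme_status s = decode_status(sf);
        printf("sct=%u, sc=%u\n", s.sct, s.sc);   /* prints: sct=0, sc=8 */
        return 0;
    }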
(the connect() failed / sock connection error / qpair failed sequence then repeats for every reconnect attempt against tqpair=0x7fa21c000b90; only the timestamps differ)
00:31:21.544 [2024-11-27 10:03:36.908085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.544 [2024-11-27 10:03:36.908115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420
00:31:21.544 qpair failed and we were unable to recover it.
00:31:21.544 [2024-11-27 10:03:36.908521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.544 [2024-11-27 10:03:36.908554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.544 qpair failed and we were unable to recover it. 00:31:21.544 [2024-11-27 10:03:36.908908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.544 [2024-11-27 10:03:36.908940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.544 qpair failed and we were unable to recover it. 00:31:21.544 [2024-11-27 10:03:36.909368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.544 [2024-11-27 10:03:36.909400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.544 qpair failed and we were unable to recover it. 00:31:21.544 [2024-11-27 10:03:36.909744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.544 [2024-11-27 10:03:36.909773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.544 qpair failed and we were unable to recover it. 00:31:21.544 [2024-11-27 10:03:36.910131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.544 [2024-11-27 10:03:36.910168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.544 qpair failed and we were unable to recover it. 00:31:21.544 [2024-11-27 10:03:36.910536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.544 [2024-11-27 10:03:36.910567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.544 qpair failed and we were unable to recover it. 00:31:21.544 [2024-11-27 10:03:36.910925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.910957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 00:31:21.545 [2024-11-27 10:03:36.911311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.911341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 00:31:21.545 [2024-11-27 10:03:36.911702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.911733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 00:31:21.545 [2024-11-27 10:03:36.912089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.912121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 
00:31:21.545 [2024-11-27 10:03:36.912516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.912548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 00:31:21.545 [2024-11-27 10:03:36.912907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.912939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 00:31:21.545 [2024-11-27 10:03:36.913296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.913329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 00:31:21.545 [2024-11-27 10:03:36.913687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.913718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 00:31:21.545 [2024-11-27 10:03:36.913956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.913990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 00:31:21.545 [2024-11-27 10:03:36.914339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.914372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 00:31:21.545 [2024-11-27 10:03:36.914726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.914757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 00:31:21.545 [2024-11-27 10:03:36.915148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.915190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 00:31:21.545 [2024-11-27 10:03:36.915466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.915498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 00:31:21.545 [2024-11-27 10:03:36.915857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.915888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 
00:31:21.545 [2024-11-27 10:03:36.916254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.916285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 00:31:21.545 [2024-11-27 10:03:36.916660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.916691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 00:31:21.545 [2024-11-27 10:03:36.917047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.917079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 00:31:21.545 [2024-11-27 10:03:36.917454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.917484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 00:31:21.545 [2024-11-27 10:03:36.917844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.917873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 00:31:21.545 [2024-11-27 10:03:36.918233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.918266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 00:31:21.545 [2024-11-27 10:03:36.918674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.918705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 00:31:21.545 [2024-11-27 10:03:36.919060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.919090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 00:31:21.545 [2024-11-27 10:03:36.919435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.919467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 00:31:21.545 [2024-11-27 10:03:36.919823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.919857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 
00:31:21.545 [2024-11-27 10:03:36.920215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.920253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 00:31:21.545 [2024-11-27 10:03:36.920615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.920644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 00:31:21.545 [2024-11-27 10:03:36.921009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.921044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 00:31:21.545 [2024-11-27 10:03:36.921336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.921369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 00:31:21.545 [2024-11-27 10:03:36.921743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.921775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 00:31:21.545 [2024-11-27 10:03:36.922134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.922175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 00:31:21.545 [2024-11-27 10:03:36.922532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.922564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 00:31:21.545 [2024-11-27 10:03:36.922913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.922943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 00:31:21.545 [2024-11-27 10:03:36.923303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.923337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 00:31:21.545 [2024-11-27 10:03:36.923586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.923615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 
00:31:21.545 [2024-11-27 10:03:36.923975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.924006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 00:31:21.545 [2024-11-27 10:03:36.924377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.924410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 00:31:21.545 [2024-11-27 10:03:36.924778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.924810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 00:31:21.545 [2024-11-27 10:03:36.925148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.545 [2024-11-27 10:03:36.925189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.545 qpair failed and we were unable to recover it. 00:31:21.546 [2024-11-27 10:03:36.925556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.925588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 00:31:21.546 [2024-11-27 10:03:36.925945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.925977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 00:31:21.546 [2024-11-27 10:03:36.926227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.926259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 00:31:21.546 [2024-11-27 10:03:36.926619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.926650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 00:31:21.546 [2024-11-27 10:03:36.926938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.926969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 00:31:21.546 [2024-11-27 10:03:36.927338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.927370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 
00:31:21.546 [2024-11-27 10:03:36.927733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.927764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 00:31:21.546 [2024-11-27 10:03:36.928178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.928211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 00:31:21.546 [2024-11-27 10:03:36.928573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.928604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 00:31:21.546 [2024-11-27 10:03:36.928995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.929027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 00:31:21.546 [2024-11-27 10:03:36.929388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.929422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 00:31:21.546 [2024-11-27 10:03:36.929776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.929806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 00:31:21.546 [2024-11-27 10:03:36.930177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.930211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 00:31:21.546 [2024-11-27 10:03:36.930591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.930624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 00:31:21.546 [2024-11-27 10:03:36.930983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.931014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 00:31:21.546 [2024-11-27 10:03:36.931382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.931417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 
00:31:21.546 [2024-11-27 10:03:36.931775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.931806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 00:31:21.546 [2024-11-27 10:03:36.932177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.932211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 00:31:21.546 [2024-11-27 10:03:36.932586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.932617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 00:31:21.546 [2024-11-27 10:03:36.932979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.933011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 00:31:21.546 [2024-11-27 10:03:36.933388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.933421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 00:31:21.546 [2024-11-27 10:03:36.933779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.933810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 00:31:21.546 [2024-11-27 10:03:36.934177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.934209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 00:31:21.546 [2024-11-27 10:03:36.934575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.934604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 00:31:21.546 [2024-11-27 10:03:36.934837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.934868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 00:31:21.546 [2024-11-27 10:03:36.935229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.935263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 
00:31:21.546 [2024-11-27 10:03:36.935630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.935667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 00:31:21.546 [2024-11-27 10:03:36.936017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.936048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 00:31:21.546 [2024-11-27 10:03:36.936421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.936453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 00:31:21.546 [2024-11-27 10:03:36.936810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.936840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 00:31:21.546 [2024-11-27 10:03:36.937199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.937233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 00:31:21.546 [2024-11-27 10:03:36.937617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.937650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 00:31:21.546 [2024-11-27 10:03:36.938007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.938037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 00:31:21.546 [2024-11-27 10:03:36.938397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.938428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 00:31:21.546 [2024-11-27 10:03:36.938785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.938818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 00:31:21.546 [2024-11-27 10:03:36.939180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.939214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 
00:31:21.546 [2024-11-27 10:03:36.939579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.939611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.546 qpair failed and we were unable to recover it. 00:31:21.546 [2024-11-27 10:03:36.939968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.546 [2024-11-27 10:03:36.940000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 00:31:21.547 [2024-11-27 10:03:36.940290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.940325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 00:31:21.547 [2024-11-27 10:03:36.940709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.940739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 00:31:21.547 [2024-11-27 10:03:36.941105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.941139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 00:31:21.547 [2024-11-27 10:03:36.941559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.941591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 00:31:21.547 [2024-11-27 10:03:36.941952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.941983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 00:31:21.547 [2024-11-27 10:03:36.942336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.942371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 00:31:21.547 [2024-11-27 10:03:36.942732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.942762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 00:31:21.547 [2024-11-27 10:03:36.943122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.943155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 
00:31:21.547 [2024-11-27 10:03:36.943553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.943585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 00:31:21.547 [2024-11-27 10:03:36.943951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.943984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 00:31:21.547 [2024-11-27 10:03:36.944349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.944381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 00:31:21.547 [2024-11-27 10:03:36.944744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.944774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 00:31:21.547 [2024-11-27 10:03:36.945140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.945190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 00:31:21.547 [2024-11-27 10:03:36.945586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.945618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 00:31:21.547 [2024-11-27 10:03:36.945964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.945995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 00:31:21.547 [2024-11-27 10:03:36.946336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.946372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 00:31:21.547 [2024-11-27 10:03:36.946724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.946756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 00:31:21.547 [2024-11-27 10:03:36.947114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.947145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 
00:31:21.547 [2024-11-27 10:03:36.947560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.947592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 00:31:21.547 [2024-11-27 10:03:36.947903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.947934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 00:31:21.547 [2024-11-27 10:03:36.948296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.948330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 00:31:21.547 [2024-11-27 10:03:36.948594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.948623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 00:31:21.547 [2024-11-27 10:03:36.948916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.948947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 00:31:21.547 [2024-11-27 10:03:36.949317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.949347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 00:31:21.547 [2024-11-27 10:03:36.949694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.949725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 00:31:21.547 [2024-11-27 10:03:36.950089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.950122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 00:31:21.547 [2024-11-27 10:03:36.950455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.950487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 00:31:21.547 [2024-11-27 10:03:36.950841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.950874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 
00:31:21.547 [2024-11-27 10:03:36.951269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.951310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 00:31:21.547 [2024-11-27 10:03:36.951659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.951690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 00:31:21.547 [2024-11-27 10:03:36.951953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.951982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 00:31:21.547 [2024-11-27 10:03:36.952362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.952397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 00:31:21.547 [2024-11-27 10:03:36.952766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.952799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 00:31:21.547 [2024-11-27 10:03:36.953149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.953208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 00:31:21.547 [2024-11-27 10:03:36.953613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.953647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 00:31:21.547 [2024-11-27 10:03:36.954017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.954049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 00:31:21.547 [2024-11-27 10:03:36.954406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.954438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.547 qpair failed and we were unable to recover it. 00:31:21.547 [2024-11-27 10:03:36.954794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.547 [2024-11-27 10:03:36.954827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.548 qpair failed and we were unable to recover it. 
00:31:21.548 [2024-11-27 10:03:36.955065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.548 [2024-11-27 10:03:36.955102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.548 qpair failed and we were unable to recover it. 00:31:21.548 [2024-11-27 10:03:36.955494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.548 [2024-11-27 10:03:36.955526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.548 qpair failed and we were unable to recover it. 00:31:21.548 [2024-11-27 10:03:36.955886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.548 [2024-11-27 10:03:36.955919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.548 qpair failed and we were unable to recover it. 00:31:21.548 [2024-11-27 10:03:36.956258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.548 [2024-11-27 10:03:36.956290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.548 qpair failed and we were unable to recover it. 00:31:21.548 [2024-11-27 10:03:36.956662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.548 [2024-11-27 10:03:36.956696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.548 qpair failed and we were unable to recover it. 00:31:21.548 [2024-11-27 10:03:36.957057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.548 [2024-11-27 10:03:36.957087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.548 qpair failed and we were unable to recover it. 00:31:21.548 [2024-11-27 10:03:36.957448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.548 [2024-11-27 10:03:36.957480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.548 qpair failed and we were unable to recover it. 00:31:21.548 [2024-11-27 10:03:36.957815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.548 [2024-11-27 10:03:36.957847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.548 qpair failed and we were unable to recover it. 00:31:21.548 [2024-11-27 10:03:36.958191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.548 [2024-11-27 10:03:36.958223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.548 qpair failed and we were unable to recover it. 00:31:21.548 [2024-11-27 10:03:36.958599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.548 [2024-11-27 10:03:36.958632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.548 qpair failed and we were unable to recover it. 
00:31:21.548 [2024-11-27 10:03:36.958986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.548 [2024-11-27 10:03:36.959018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420
00:31:21.548 qpair failed and we were unable to recover it.
[... the same three-line triplet — connect() failed (errno = 111) from posix.c:1054:posix_sock_create, sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 from nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock, then "qpair failed and we were unable to recover it." — repeats roughly 200 more times; only the timestamps (10:03:36.959 through 10:03:37.040) and elapsed-time prefixes (00:31:21.548 through 00:31:21.828) advance ...]
00:31:21.828 [2024-11-27 10:03:37.040674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.828 [2024-11-27 10:03:37.040705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420
00:31:21.828 qpair failed and we were unable to recover it.
00:31:21.828 [2024-11-27 10:03:37.040953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.828 [2024-11-27 10:03:37.040982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.828 qpair failed and we were unable to recover it. 00:31:21.828 [2024-11-27 10:03:37.041376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.828 [2024-11-27 10:03:37.041408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.828 qpair failed and we were unable to recover it. 00:31:21.828 [2024-11-27 10:03:37.041749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.828 [2024-11-27 10:03:37.041779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.828 qpair failed and we were unable to recover it. 00:31:21.828 [2024-11-27 10:03:37.042118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.828 [2024-11-27 10:03:37.042146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.828 qpair failed and we were unable to recover it. 00:31:21.828 [2024-11-27 10:03:37.042511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.828 [2024-11-27 10:03:37.042540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.829 qpair failed and we were unable to recover it. 00:31:21.829 [2024-11-27 10:03:37.042909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.829 [2024-11-27 10:03:37.042938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.829 qpair failed and we were unable to recover it. 00:31:21.829 [2024-11-27 10:03:37.043306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.829 [2024-11-27 10:03:37.043337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.829 qpair failed and we were unable to recover it. 00:31:21.829 [2024-11-27 10:03:37.043678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.829 [2024-11-27 10:03:37.043706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.829 qpair failed and we were unable to recover it. 00:31:21.829 [2024-11-27 10:03:37.044039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.829 [2024-11-27 10:03:37.044069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.829 qpair failed and we were unable to recover it. 00:31:21.829 [2024-11-27 10:03:37.044428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.829 [2024-11-27 10:03:37.044459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.829 qpair failed and we were unable to recover it. 
00:31:21.829 [2024-11-27 10:03:37.044831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.829 [2024-11-27 10:03:37.044859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.829 qpair failed and we were unable to recover it. 00:31:21.829 [2024-11-27 10:03:37.045108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.829 [2024-11-27 10:03:37.045136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.829 qpair failed and we were unable to recover it. 00:31:21.829 [2024-11-27 10:03:37.045506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.829 [2024-11-27 10:03:37.045535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.829 qpair failed and we were unable to recover it. 00:31:21.829 [2024-11-27 10:03:37.045902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.829 [2024-11-27 10:03:37.045931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.829 qpair failed and we were unable to recover it. 00:31:21.829 [2024-11-27 10:03:37.046268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.829 [2024-11-27 10:03:37.046297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.829 qpair failed and we were unable to recover it. 00:31:21.829 [2024-11-27 10:03:37.046651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.829 [2024-11-27 10:03:37.046680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.829 qpair failed and we were unable to recover it. 00:31:21.829 [2024-11-27 10:03:37.047054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.829 [2024-11-27 10:03:37.047082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.829 qpair failed and we were unable to recover it. 00:31:21.829 [2024-11-27 10:03:37.047472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.829 [2024-11-27 10:03:37.047501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.829 qpair failed and we were unable to recover it. 00:31:21.829 [2024-11-27 10:03:37.047734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.829 [2024-11-27 10:03:37.047766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.829 qpair failed and we were unable to recover it. 00:31:21.829 [2024-11-27 10:03:37.048125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.829 [2024-11-27 10:03:37.048153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.829 qpair failed and we were unable to recover it. 
00:31:21.829 [2024-11-27 10:03:37.048505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.829 [2024-11-27 10:03:37.048535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.829 qpair failed and we were unable to recover it. 00:31:21.829 [2024-11-27 10:03:37.048808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.829 [2024-11-27 10:03:37.048837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.829 qpair failed and we were unable to recover it. 00:31:21.829 [2024-11-27 10:03:37.049199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.829 [2024-11-27 10:03:37.049230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.829 qpair failed and we were unable to recover it. 00:31:21.829 [2024-11-27 10:03:37.049583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.829 [2024-11-27 10:03:37.049613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.829 qpair failed and we were unable to recover it. 00:31:21.829 [2024-11-27 10:03:37.050040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.829 [2024-11-27 10:03:37.050068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.829 qpair failed and we were unable to recover it. 00:31:21.829 [2024-11-27 10:03:37.050329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.829 [2024-11-27 10:03:37.050358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.829 qpair failed and we were unable to recover it. 00:31:21.829 [2024-11-27 10:03:37.050743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.829 [2024-11-27 10:03:37.050773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.829 qpair failed and we were unable to recover it. 00:31:21.829 [2024-11-27 10:03:37.051138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.829 [2024-11-27 10:03:37.051174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.829 qpair failed and we were unable to recover it. 00:31:21.829 [2024-11-27 10:03:37.051524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.829 [2024-11-27 10:03:37.051552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.829 qpair failed and we were unable to recover it. 00:31:21.829 [2024-11-27 10:03:37.051895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.829 [2024-11-27 10:03:37.051924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.829 qpair failed and we were unable to recover it. 
00:31:21.829 [2024-11-27 10:03:37.052278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.829 [2024-11-27 10:03:37.052309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.829 qpair failed and we were unable to recover it. 00:31:21.829 [2024-11-27 10:03:37.052649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.829 [2024-11-27 10:03:37.052677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.829 qpair failed and we were unable to recover it. 00:31:21.829 [2024-11-27 10:03:37.053040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.829 [2024-11-27 10:03:37.053069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.829 qpair failed and we were unable to recover it. 00:31:21.829 [2024-11-27 10:03:37.053443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.829 [2024-11-27 10:03:37.053473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.829 qpair failed and we were unable to recover it. 00:31:21.829 [2024-11-27 10:03:37.053836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.829 [2024-11-27 10:03:37.053865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.829 qpair failed and we were unable to recover it. 00:31:21.829 [2024-11-27 10:03:37.054235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.829 [2024-11-27 10:03:37.054265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.829 qpair failed and we were unable to recover it. 00:31:21.829 [2024-11-27 10:03:37.054520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.054552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 00:31:21.830 [2024-11-27 10:03:37.054923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.054960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 00:31:21.830 [2024-11-27 10:03:37.055318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.055349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 00:31:21.830 [2024-11-27 10:03:37.055710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.055738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 
00:31:21.830 [2024-11-27 10:03:37.056104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.056131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 00:31:21.830 [2024-11-27 10:03:37.056515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.056545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 00:31:21.830 [2024-11-27 10:03:37.056914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.056943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 00:31:21.830 [2024-11-27 10:03:37.057301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.057330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 00:31:21.830 [2024-11-27 10:03:37.057689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.057718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 00:31:21.830 [2024-11-27 10:03:37.058072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.058102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 00:31:21.830 [2024-11-27 10:03:37.058456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.058486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 00:31:21.830 [2024-11-27 10:03:37.058863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.058892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 00:31:21.830 [2024-11-27 10:03:37.059232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.059262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 00:31:21.830 [2024-11-27 10:03:37.059636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.059664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 
00:31:21.830 [2024-11-27 10:03:37.060029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.060059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 00:31:21.830 [2024-11-27 10:03:37.060425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.060455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 00:31:21.830 [2024-11-27 10:03:37.060817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.060846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 00:31:21.830 [2024-11-27 10:03:37.061208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.061239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 00:31:21.830 [2024-11-27 10:03:37.061600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.061628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 00:31:21.830 [2024-11-27 10:03:37.061993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.062021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 00:31:21.830 [2024-11-27 10:03:37.062390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.062419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 00:31:21.830 [2024-11-27 10:03:37.062713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.062741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 00:31:21.830 [2024-11-27 10:03:37.063111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.063139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 00:31:21.830 [2024-11-27 10:03:37.063506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.063535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 
00:31:21.830 [2024-11-27 10:03:37.063887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.063915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 00:31:21.830 [2024-11-27 10:03:37.064276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.064306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 00:31:21.830 [2024-11-27 10:03:37.064537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.064570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 00:31:21.830 [2024-11-27 10:03:37.064945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.064973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 00:31:21.830 [2024-11-27 10:03:37.065329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.065359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 00:31:21.830 [2024-11-27 10:03:37.065728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.065757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 00:31:21.830 [2024-11-27 10:03:37.066116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.066144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 00:31:21.830 [2024-11-27 10:03:37.066519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.066548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 00:31:21.830 [2024-11-27 10:03:37.066801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.066828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 00:31:21.830 [2024-11-27 10:03:37.067200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.067230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 
00:31:21.830 [2024-11-27 10:03:37.067599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.067629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 00:31:21.830 [2024-11-27 10:03:37.067993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.068021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 00:31:21.830 [2024-11-27 10:03:37.068389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.830 [2024-11-27 10:03:37.068419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.830 qpair failed and we were unable to recover it. 00:31:21.830 [2024-11-27 10:03:37.068665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.068697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 00:31:21.831 [2024-11-27 10:03:37.068947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.068975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 00:31:21.831 [2024-11-27 10:03:37.069369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.069399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 00:31:21.831 [2024-11-27 10:03:37.069775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.069804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 00:31:21.831 [2024-11-27 10:03:37.070176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.070212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 00:31:21.831 [2024-11-27 10:03:37.070574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.070603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 00:31:21.831 [2024-11-27 10:03:37.070976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.071006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 
00:31:21.831 [2024-11-27 10:03:37.071390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.071419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 00:31:21.831 [2024-11-27 10:03:37.071774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.071803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 00:31:21.831 [2024-11-27 10:03:37.072173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.072202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 00:31:21.831 [2024-11-27 10:03:37.072598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.072625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 00:31:21.831 [2024-11-27 10:03:37.072988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.073017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 00:31:21.831 [2024-11-27 10:03:37.073392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.073429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 00:31:21.831 [2024-11-27 10:03:37.073762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.073791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 00:31:21.831 [2024-11-27 10:03:37.074144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.074183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 00:31:21.831 [2024-11-27 10:03:37.074588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.074616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 00:31:21.831 [2024-11-27 10:03:37.074968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.074996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 
00:31:21.831 [2024-11-27 10:03:37.075351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.075381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 00:31:21.831 [2024-11-27 10:03:37.075755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.075784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 00:31:21.831 [2024-11-27 10:03:37.076152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.076190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 00:31:21.831 [2024-11-27 10:03:37.076543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.076571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 00:31:21.831 [2024-11-27 10:03:37.076910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.076938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 00:31:21.831 [2024-11-27 10:03:37.077233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.077262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 00:31:21.831 [2024-11-27 10:03:37.077637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.077667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 00:31:21.831 [2024-11-27 10:03:37.077909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.077940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 00:31:21.831 [2024-11-27 10:03:37.078331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.078361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 00:31:21.831 [2024-11-27 10:03:37.078706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.078734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 
00:31:21.831 [2024-11-27 10:03:37.079076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.079104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 00:31:21.831 [2024-11-27 10:03:37.079469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.079500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 00:31:21.831 [2024-11-27 10:03:37.079861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.079889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 00:31:21.831 [2024-11-27 10:03:37.080235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.080265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 00:31:21.831 [2024-11-27 10:03:37.080698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.080726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 00:31:21.831 [2024-11-27 10:03:37.081059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.081089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 00:31:21.831 [2024-11-27 10:03:37.081452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.081483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 00:31:21.831 [2024-11-27 10:03:37.081843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.081873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 00:31:21.831 [2024-11-27 10:03:37.082217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.082247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 00:31:21.831 [2024-11-27 10:03:37.082606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.082635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.831 qpair failed and we were unable to recover it. 
00:31:21.831 [2024-11-27 10:03:37.082989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.831 [2024-11-27 10:03:37.083017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.832 qpair failed and we were unable to recover it. 00:31:21.832 [2024-11-27 10:03:37.083391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.832 [2024-11-27 10:03:37.083420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.832 qpair failed and we were unable to recover it. 00:31:21.832 [2024-11-27 10:03:37.083791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.832 [2024-11-27 10:03:37.083820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.832 qpair failed and we were unable to recover it. 00:31:21.832 [2024-11-27 10:03:37.084178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.832 [2024-11-27 10:03:37.084207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.832 qpair failed and we were unable to recover it. 00:31:21.832 [2024-11-27 10:03:37.084466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.832 [2024-11-27 10:03:37.084498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.832 qpair failed and we were unable to recover it. 00:31:21.832 [2024-11-27 10:03:37.084884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.832 [2024-11-27 10:03:37.084913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.832 qpair failed and we were unable to recover it. 00:31:21.832 [2024-11-27 10:03:37.085237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.832 [2024-11-27 10:03:37.085267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.832 qpair failed and we were unable to recover it. 00:31:21.832 [2024-11-27 10:03:37.085673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.832 [2024-11-27 10:03:37.085707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.832 qpair failed and we were unable to recover it. 00:31:21.832 [2024-11-27 10:03:37.085948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.832 [2024-11-27 10:03:37.085976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.832 qpair failed and we were unable to recover it. 00:31:21.832 [2024-11-27 10:03:37.086325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.832 [2024-11-27 10:03:37.086355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.832 qpair failed and we were unable to recover it. 
00:31:21.832 [2024-11-27 10:03:37.086709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.832 [2024-11-27 10:03:37.086739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.832 qpair failed and we were unable to recover it. 00:31:21.832 [2024-11-27 10:03:37.087078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.832 [2024-11-27 10:03:37.087106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.832 qpair failed and we were unable to recover it. 00:31:21.832 [2024-11-27 10:03:37.087396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.832 [2024-11-27 10:03:37.087427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.832 qpair failed and we were unable to recover it. 00:31:21.832 [2024-11-27 10:03:37.087771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.832 [2024-11-27 10:03:37.087799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.832 qpair failed and we were unable to recover it. 00:31:21.832 [2024-11-27 10:03:37.088171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.832 [2024-11-27 10:03:37.088202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.832 qpair failed and we were unable to recover it. 00:31:21.832 [2024-11-27 10:03:37.088557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.832 [2024-11-27 10:03:37.088587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.832 qpair failed and we were unable to recover it. 00:31:21.832 [2024-11-27 10:03:37.088940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.832 [2024-11-27 10:03:37.088971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.832 qpair failed and we were unable to recover it. 00:31:21.832 [2024-11-27 10:03:37.089219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.832 [2024-11-27 10:03:37.089253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.832 qpair failed and we were unable to recover it. 00:31:21.832 [2024-11-27 10:03:37.089639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.832 [2024-11-27 10:03:37.089668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.832 qpair failed and we were unable to recover it. 00:31:21.832 [2024-11-27 10:03:37.090017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.832 [2024-11-27 10:03:37.090045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.832 qpair failed and we were unable to recover it. 
00:31:21.832 [2024-11-27 10:03:37.090386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.832 [2024-11-27 10:03:37.090416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420
00:31:21.832 qpair failed and we were unable to recover it.
00:31:21.832 [... the same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it") repeats roughly 200 more times between 10:03:37.090760 and 10:03:37.171269 ...]
00:31:21.838 [2024-11-27 10:03:37.171622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.838 [2024-11-27 10:03:37.171651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.838 qpair failed and we were unable to recover it. 00:31:21.838 [2024-11-27 10:03:37.172022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.838 [2024-11-27 10:03:37.172050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.838 qpair failed and we were unable to recover it. 00:31:21.838 [2024-11-27 10:03:37.172401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.838 [2024-11-27 10:03:37.172432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.838 qpair failed and we were unable to recover it. 00:31:21.838 [2024-11-27 10:03:37.172817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.838 [2024-11-27 10:03:37.172846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.838 qpair failed and we were unable to recover it. 00:31:21.838 [2024-11-27 10:03:37.173286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.838 [2024-11-27 10:03:37.173317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.838 qpair failed and we were unable to recover it. 00:31:21.838 [2024-11-27 10:03:37.173667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.838 [2024-11-27 10:03:37.173695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.838 qpair failed and we were unable to recover it. 00:31:21.838 [2024-11-27 10:03:37.174061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.838 [2024-11-27 10:03:37.174090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.838 qpair failed and we were unable to recover it. 00:31:21.838 [2024-11-27 10:03:37.174528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.838 [2024-11-27 10:03:37.174559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.838 qpair failed and we were unable to recover it. 00:31:21.838 [2024-11-27 10:03:37.174920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.838 [2024-11-27 10:03:37.174949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.838 qpair failed and we were unable to recover it. 00:31:21.838 [2024-11-27 10:03:37.175316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.838 [2024-11-27 10:03:37.175345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.838 qpair failed and we were unable to recover it. 
00:31:21.838 [2024-11-27 10:03:37.175585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.838 [2024-11-27 10:03:37.175617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.838 qpair failed and we were unable to recover it. 00:31:21.838 [2024-11-27 10:03:37.175987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.838 [2024-11-27 10:03:37.176015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.838 qpair failed and we were unable to recover it. 00:31:21.838 [2024-11-27 10:03:37.176387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.838 [2024-11-27 10:03:37.176418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.838 qpair failed and we were unable to recover it. 00:31:21.838 [2024-11-27 10:03:37.176772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.838 [2024-11-27 10:03:37.176801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.838 qpair failed and we were unable to recover it. 00:31:21.838 [2024-11-27 10:03:37.177167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.838 [2024-11-27 10:03:37.177198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.838 qpair failed and we were unable to recover it. 00:31:21.838 [2024-11-27 10:03:37.177563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.838 [2024-11-27 10:03:37.177592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.838 qpair failed and we were unable to recover it. 00:31:21.838 [2024-11-27 10:03:37.177959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.838 [2024-11-27 10:03:37.177988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.838 qpair failed and we were unable to recover it. 00:31:21.838 [2024-11-27 10:03:37.178354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.838 [2024-11-27 10:03:37.178390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.838 qpair failed and we were unable to recover it. 00:31:21.838 [2024-11-27 10:03:37.178790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.838 [2024-11-27 10:03:37.178820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.838 qpair failed and we were unable to recover it. 00:31:21.838 [2024-11-27 10:03:37.179178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.838 [2024-11-27 10:03:37.179207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.838 qpair failed and we were unable to recover it. 
00:31:21.838 [2024-11-27 10:03:37.179577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.838 [2024-11-27 10:03:37.179605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.838 qpair failed and we were unable to recover it. 00:31:21.838 [2024-11-27 10:03:37.179979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.838 [2024-11-27 10:03:37.180009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.838 qpair failed and we were unable to recover it. 00:31:21.838 [2024-11-27 10:03:37.180257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.838 [2024-11-27 10:03:37.180286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.838 qpair failed and we were unable to recover it. 00:31:21.838 [2024-11-27 10:03:37.180658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.180687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 00:31:21.839 [2024-11-27 10:03:37.181050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.181080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 00:31:21.839 [2024-11-27 10:03:37.181374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.181409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 00:31:21.839 [2024-11-27 10:03:37.181752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.181780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 00:31:21.839 [2024-11-27 10:03:37.182156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.182207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 00:31:21.839 [2024-11-27 10:03:37.182583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.182613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 00:31:21.839 [2024-11-27 10:03:37.182982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.183013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 
00:31:21.839 [2024-11-27 10:03:37.183270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.183299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 00:31:21.839 [2024-11-27 10:03:37.183679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.183710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 00:31:21.839 [2024-11-27 10:03:37.184060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.184091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 00:31:21.839 [2024-11-27 10:03:37.184484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.184514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 00:31:21.839 [2024-11-27 10:03:37.184855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.184885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 00:31:21.839 [2024-11-27 10:03:37.185235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.185264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 00:31:21.839 [2024-11-27 10:03:37.185619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.185649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 00:31:21.839 [2024-11-27 10:03:37.186002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.186030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 00:31:21.839 [2024-11-27 10:03:37.186384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.186417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 00:31:21.839 [2024-11-27 10:03:37.186780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.186809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 
00:31:21.839 [2024-11-27 10:03:37.187180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.187212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 00:31:21.839 [2024-11-27 10:03:37.187590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.187619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 00:31:21.839 [2024-11-27 10:03:37.187978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.188008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 00:31:21.839 [2024-11-27 10:03:37.188278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.188310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 00:31:21.839 [2024-11-27 10:03:37.188679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.188708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 00:31:21.839 [2024-11-27 10:03:37.188961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.188993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 00:31:21.839 [2024-11-27 10:03:37.189380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.189412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 00:31:21.839 [2024-11-27 10:03:37.189778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.189808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 00:31:21.839 [2024-11-27 10:03:37.190201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.190236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 00:31:21.839 [2024-11-27 10:03:37.190581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.190610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 
00:31:21.839 [2024-11-27 10:03:37.190998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.191028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 00:31:21.839 [2024-11-27 10:03:37.191382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.191413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 00:31:21.839 [2024-11-27 10:03:37.191774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.191803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 00:31:21.839 [2024-11-27 10:03:37.192154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.192215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 00:31:21.839 [2024-11-27 10:03:37.192614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.192645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 00:31:21.839 [2024-11-27 10:03:37.192857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.192886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 00:31:21.839 [2024-11-27 10:03:37.193264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.193295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 00:31:21.839 [2024-11-27 10:03:37.193663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.193698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 00:31:21.839 [2024-11-27 10:03:37.194040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.194069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 00:31:21.839 [2024-11-27 10:03:37.194425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.194458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 
00:31:21.839 [2024-11-27 10:03:37.194822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.839 [2024-11-27 10:03:37.194852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.839 qpair failed and we were unable to recover it. 00:31:21.839 [2024-11-27 10:03:37.195215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.195246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 00:31:21.840 [2024-11-27 10:03:37.195625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.195654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 00:31:21.840 [2024-11-27 10:03:37.196012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.196042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 00:31:21.840 [2024-11-27 10:03:37.196390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.196419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 00:31:21.840 [2024-11-27 10:03:37.196689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.196719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 00:31:21.840 [2024-11-27 10:03:37.197055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.197084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 00:31:21.840 [2024-11-27 10:03:37.197448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.197478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 00:31:21.840 [2024-11-27 10:03:37.197840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.197871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 00:31:21.840 [2024-11-27 10:03:37.198238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.198272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 
00:31:21.840 [2024-11-27 10:03:37.198636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.198666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 00:31:21.840 [2024-11-27 10:03:37.199046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.199075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 00:31:21.840 [2024-11-27 10:03:37.199420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.199453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 00:31:21.840 [2024-11-27 10:03:37.199806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.199836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 00:31:21.840 [2024-11-27 10:03:37.200199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.200232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 00:31:21.840 [2024-11-27 10:03:37.200610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.200640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 00:31:21.840 [2024-11-27 10:03:37.200881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.200914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 00:31:21.840 [2024-11-27 10:03:37.201281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.201313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 00:31:21.840 [2024-11-27 10:03:37.201684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.201714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 00:31:21.840 [2024-11-27 10:03:37.201979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.202011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 
00:31:21.840 [2024-11-27 10:03:37.202418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.202450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 00:31:21.840 [2024-11-27 10:03:37.202881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.202910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 00:31:21.840 [2024-11-27 10:03:37.203270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.203301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 00:31:21.840 [2024-11-27 10:03:37.203642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.203672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 00:31:21.840 [2024-11-27 10:03:37.204041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.204078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 00:31:21.840 [2024-11-27 10:03:37.204328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.204363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 00:31:21.840 [2024-11-27 10:03:37.204722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.204752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 00:31:21.840 [2024-11-27 10:03:37.205113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.205143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 00:31:21.840 [2024-11-27 10:03:37.205498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.205528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 00:31:21.840 [2024-11-27 10:03:37.205909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.205938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 
00:31:21.840 [2024-11-27 10:03:37.206285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.206316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 00:31:21.840 [2024-11-27 10:03:37.206687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.206721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 00:31:21.840 [2024-11-27 10:03:37.206988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.207021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 00:31:21.840 [2024-11-27 10:03:37.207409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.207438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 00:31:21.840 [2024-11-27 10:03:37.207798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.207831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 00:31:21.840 [2024-11-27 10:03:37.208193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.208226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 00:31:21.840 [2024-11-27 10:03:37.208619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.208649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 00:31:21.840 [2024-11-27 10:03:37.209015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.209045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 00:31:21.840 [2024-11-27 10:03:37.209424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.840 [2024-11-27 10:03:37.209456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.840 qpair failed and we were unable to recover it. 00:31:21.841 [2024-11-27 10:03:37.209808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.841 [2024-11-27 10:03:37.209839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.841 qpair failed and we were unable to recover it. 
00:31:21.841 [2024-11-27 10:03:37.210184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.841 [2024-11-27 10:03:37.210214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.841 qpair failed and we were unable to recover it. 00:31:21.841 [2024-11-27 10:03:37.210655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.841 [2024-11-27 10:03:37.210686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.841 qpair failed and we were unable to recover it. 00:31:21.841 [2024-11-27 10:03:37.211039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.841 [2024-11-27 10:03:37.211070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.841 qpair failed and we were unable to recover it. 00:31:21.841 [2024-11-27 10:03:37.211441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.841 [2024-11-27 10:03:37.211471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.841 qpair failed and we were unable to recover it. 00:31:21.841 [2024-11-27 10:03:37.211718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.841 [2024-11-27 10:03:37.211751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.841 qpair failed and we were unable to recover it. 00:31:21.841 [2024-11-27 10:03:37.212103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.841 [2024-11-27 10:03:37.212133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.841 qpair failed and we were unable to recover it. 00:31:21.841 [2024-11-27 10:03:37.212497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.841 [2024-11-27 10:03:37.212528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.841 qpair failed and we were unable to recover it. 00:31:21.841 [2024-11-27 10:03:37.212893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.841 [2024-11-27 10:03:37.212923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.841 qpair failed and we were unable to recover it. 00:31:21.841 [2024-11-27 10:03:37.213199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.841 [2024-11-27 10:03:37.213229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.841 qpair failed and we were unable to recover it. 00:31:21.841 [2024-11-27 10:03:37.213642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.841 [2024-11-27 10:03:37.213673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.841 qpair failed and we were unable to recover it. 
00:31:21.841 [2024-11-27 10:03:37.214042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.841 [2024-11-27 10:03:37.214071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.841 qpair failed and we were unable to recover it. 00:31:21.841 [2024-11-27 10:03:37.214455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.841 [2024-11-27 10:03:37.214488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.841 qpair failed and we were unable to recover it. 00:31:21.841 [2024-11-27 10:03:37.214741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.841 [2024-11-27 10:03:37.214772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.841 qpair failed and we were unable to recover it. 00:31:21.841 [2024-11-27 10:03:37.215129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.841 [2024-11-27 10:03:37.215172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.841 qpair failed and we were unable to recover it. 00:31:21.841 [2024-11-27 10:03:37.215535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.841 [2024-11-27 10:03:37.215567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.841 qpair failed and we were unable to recover it. 00:31:21.841 [2024-11-27 10:03:37.215942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.841 [2024-11-27 10:03:37.215974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.841 qpair failed and we were unable to recover it. 00:31:21.841 [2024-11-27 10:03:37.216227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.841 [2024-11-27 10:03:37.216259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.841 qpair failed and we were unable to recover it. 00:31:21.841 [2024-11-27 10:03:37.216646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.841 [2024-11-27 10:03:37.216676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.841 qpair failed and we were unable to recover it. 00:31:21.841 [2024-11-27 10:03:37.217019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.841 [2024-11-27 10:03:37.217051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.841 qpair failed and we were unable to recover it. 00:31:21.841 [2024-11-27 10:03:37.217499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.841 [2024-11-27 10:03:37.217530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.841 qpair failed and we were unable to recover it. 
00:31:21.841 [2024-11-27 10:03:37.217913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.841 [2024-11-27 10:03:37.217944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.841 qpair failed and we were unable to recover it. 00:31:21.841 [2024-11-27 10:03:37.218341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.841 [2024-11-27 10:03:37.218371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.841 qpair failed and we were unable to recover it. 00:31:21.841 [2024-11-27 10:03:37.218729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.841 [2024-11-27 10:03:37.218758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.841 qpair failed and we were unable to recover it. 00:31:21.841 [2024-11-27 10:03:37.219131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.841 [2024-11-27 10:03:37.219170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.841 qpair failed and we were unable to recover it. 00:31:21.841 [2024-11-27 10:03:37.219531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.841 [2024-11-27 10:03:37.219568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.841 qpair failed and we were unable to recover it. 00:31:21.841 [2024-11-27 10:03:37.219920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.841 [2024-11-27 10:03:37.219950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.841 qpair failed and we were unable to recover it. 00:31:21.841 [2024-11-27 10:03:37.220321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.841 [2024-11-27 10:03:37.220351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.841 qpair failed and we were unable to recover it. 00:31:21.841 [2024-11-27 10:03:37.220710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.841 [2024-11-27 10:03:37.220741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.841 qpair failed and we were unable to recover it. 00:31:21.841 [2024-11-27 10:03:37.221103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.841 [2024-11-27 10:03:37.221135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.841 qpair failed and we were unable to recover it. 00:31:21.841 [2024-11-27 10:03:37.221523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.841 [2024-11-27 10:03:37.221553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:21.841 qpair failed and we were unable to recover it. 
00:31:21.841 [2024-11-27 10:03:37.221828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.841 [2024-11-27 10:03:37.221857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420
00:31:21.841 qpair failed and we were unable to recover it.
[... the same three-message sequence repeats for each subsequent qpair connection attempt (console time 00:31:21.841-00:31:22.113, wall-clock 10:03:37.221-10:03:37.302), always against tqpair=0x7fa21c000b90 at addr=10.0.0.2, port=4420, with only the timestamps changing ...]
00:31:22.113 [2024-11-27 10:03:37.302517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.113 [2024-11-27 10:03:37.302549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.113 qpair failed and we were unable to recover it. 00:31:22.113 [2024-11-27 10:03:37.302893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.113 [2024-11-27 10:03:37.302923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.113 qpair failed and we were unable to recover it. 00:31:22.113 [2024-11-27 10:03:37.303279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.113 [2024-11-27 10:03:37.303311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.113 qpair failed and we were unable to recover it. 00:31:22.113 [2024-11-27 10:03:37.303693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.113 [2024-11-27 10:03:37.303720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.113 qpair failed and we were unable to recover it. 00:31:22.113 [2024-11-27 10:03:37.304124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.113 [2024-11-27 10:03:37.304152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.113 qpair failed and we were unable to recover it. 00:31:22.113 [2024-11-27 10:03:37.304538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.113 [2024-11-27 10:03:37.304566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.113 qpair failed and we were unable to recover it. 00:31:22.113 [2024-11-27 10:03:37.304939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.113 [2024-11-27 10:03:37.304968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.113 qpair failed and we were unable to recover it. 00:31:22.113 [2024-11-27 10:03:37.305336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.113 [2024-11-27 10:03:37.305364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.113 qpair failed and we were unable to recover it. 00:31:22.113 [2024-11-27 10:03:37.305717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.113 [2024-11-27 10:03:37.305745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.113 qpair failed and we were unable to recover it. 00:31:22.113 [2024-11-27 10:03:37.306135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.113 [2024-11-27 10:03:37.306174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.113 qpair failed and we were unable to recover it. 
00:31:22.113 [2024-11-27 10:03:37.306524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.113 [2024-11-27 10:03:37.306552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.113 qpair failed and we were unable to recover it. 00:31:22.113 [2024-11-27 10:03:37.306928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.113 [2024-11-27 10:03:37.306956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.113 qpair failed and we were unable to recover it. 00:31:22.113 [2024-11-27 10:03:37.307177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.113 [2024-11-27 10:03:37.307206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.113 qpair failed and we were unable to recover it. 00:31:22.113 [2024-11-27 10:03:37.307573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.113 [2024-11-27 10:03:37.307601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.113 qpair failed and we were unable to recover it. 00:31:22.113 [2024-11-27 10:03:37.307952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.113 [2024-11-27 10:03:37.307980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.113 qpair failed and we were unable to recover it. 00:31:22.113 [2024-11-27 10:03:37.308341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.113 [2024-11-27 10:03:37.308370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.113 qpair failed and we were unable to recover it. 00:31:22.114 [2024-11-27 10:03:37.308775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.308803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 00:31:22.114 [2024-11-27 10:03:37.309260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.309289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 00:31:22.114 [2024-11-27 10:03:37.309680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.309709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 00:31:22.114 [2024-11-27 10:03:37.310103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.310131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 
00:31:22.114 [2024-11-27 10:03:37.310511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.310540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 00:31:22.114 [2024-11-27 10:03:37.310953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.310981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 00:31:22.114 [2024-11-27 10:03:37.311380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.311412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 00:31:22.114 [2024-11-27 10:03:37.311651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.311678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 00:31:22.114 [2024-11-27 10:03:37.312029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.312071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 00:31:22.114 [2024-11-27 10:03:37.312462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.312493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 00:31:22.114 [2024-11-27 10:03:37.312853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.312881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 00:31:22.114 [2024-11-27 10:03:37.313280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.313309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 00:31:22.114 [2024-11-27 10:03:37.313673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.313701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 00:31:22.114 [2024-11-27 10:03:37.314069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.314097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 
00:31:22.114 [2024-11-27 10:03:37.314470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.314499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 00:31:22.114 [2024-11-27 10:03:37.314901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.314930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 00:31:22.114 [2024-11-27 10:03:37.315295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.315324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 00:31:22.114 [2024-11-27 10:03:37.315694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.315722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 00:31:22.114 [2024-11-27 10:03:37.316116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.316144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 00:31:22.114 [2024-11-27 10:03:37.316501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.316530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 00:31:22.114 [2024-11-27 10:03:37.316890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.316917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 00:31:22.114 [2024-11-27 10:03:37.317283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.317312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 00:31:22.114 [2024-11-27 10:03:37.317687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.317715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 00:31:22.114 [2024-11-27 10:03:37.318072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.318100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 
00:31:22.114 [2024-11-27 10:03:37.318469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.318498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 00:31:22.114 [2024-11-27 10:03:37.318731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.318758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 00:31:22.114 [2024-11-27 10:03:37.319133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.319171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 00:31:22.114 [2024-11-27 10:03:37.319577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.319604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 00:31:22.114 [2024-11-27 10:03:37.319961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.319989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 00:31:22.114 [2024-11-27 10:03:37.320335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.320364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 00:31:22.114 [2024-11-27 10:03:37.320732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.320760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 00:31:22.114 [2024-11-27 10:03:37.321126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.321153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 00:31:22.114 [2024-11-27 10:03:37.321509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.321538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 00:31:22.114 [2024-11-27 10:03:37.321916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.321943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 
00:31:22.114 [2024-11-27 10:03:37.322308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.322337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 00:31:22.114 [2024-11-27 10:03:37.322695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.114 [2024-11-27 10:03:37.322724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.114 qpair failed and we were unable to recover it. 00:31:22.115 [2024-11-27 10:03:37.323084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.323112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 00:31:22.115 [2024-11-27 10:03:37.323495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.323524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 00:31:22.115 [2024-11-27 10:03:37.323882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.323912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 00:31:22.115 [2024-11-27 10:03:37.324271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.324300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 00:31:22.115 [2024-11-27 10:03:37.324668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.324697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 00:31:22.115 [2024-11-27 10:03:37.325059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.325088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 00:31:22.115 [2024-11-27 10:03:37.325462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.325490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 00:31:22.115 [2024-11-27 10:03:37.325854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.325881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 
00:31:22.115 [2024-11-27 10:03:37.326300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.326330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 00:31:22.115 [2024-11-27 10:03:37.326692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.326720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 00:31:22.115 [2024-11-27 10:03:37.327168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.327197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 00:31:22.115 [2024-11-27 10:03:37.327551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.327579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 00:31:22.115 [2024-11-27 10:03:37.327815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.327850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 00:31:22.115 [2024-11-27 10:03:37.328214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.328243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 00:31:22.115 [2024-11-27 10:03:37.328616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.328644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 00:31:22.115 [2024-11-27 10:03:37.329008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.329036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 00:31:22.115 [2024-11-27 10:03:37.329414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.329443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 00:31:22.115 [2024-11-27 10:03:37.329799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.329826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 
00:31:22.115 [2024-11-27 10:03:37.330185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.330214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 00:31:22.115 [2024-11-27 10:03:37.330580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.330608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 00:31:22.115 [2024-11-27 10:03:37.330981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.331009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 00:31:22.115 [2024-11-27 10:03:37.331382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.331412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 00:31:22.115 [2024-11-27 10:03:37.331789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.549734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 00:31:22.115 [2024-11-27 10:03:37.550232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.550274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 00:31:22.115 [2024-11-27 10:03:37.550667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.550698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 00:31:22.115 [2024-11-27 10:03:37.550965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.550994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 00:31:22.115 [2024-11-27 10:03:37.551228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.551287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 00:31:22.115 [2024-11-27 10:03:37.551717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.551746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 
00:31:22.115 [2024-11-27 10:03:37.552097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.552126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 00:31:22.115 [2024-11-27 10:03:37.552596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.552628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 00:31:22.115 [2024-11-27 10:03:37.552980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.553010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 00:31:22.115 [2024-11-27 10:03:37.553281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.553312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 00:31:22.115 [2024-11-27 10:03:37.553709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.553738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 00:31:22.115 [2024-11-27 10:03:37.554097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.554126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 00:31:22.115 [2024-11-27 10:03:37.554512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.554543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 00:31:22.115 [2024-11-27 10:03:37.554942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.115 [2024-11-27 10:03:37.554973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.115 qpair failed and we were unable to recover it. 00:31:22.115 [2024-11-27 10:03:37.555194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.116 [2024-11-27 10:03:37.555226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.116 qpair failed and we were unable to recover it. 00:31:22.116 [2024-11-27 10:03:37.555588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.116 [2024-11-27 10:03:37.555620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.116 qpair failed and we were unable to recover it. 
00:31:22.116 [2024-11-27 10:03:37.555987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.116 [2024-11-27 10:03:37.556016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.116 qpair failed and we were unable to recover it. 00:31:22.116 [2024-11-27 10:03:37.556396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.116 [2024-11-27 10:03:37.556427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.116 qpair failed and we were unable to recover it. 00:31:22.116 [2024-11-27 10:03:37.556834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.116 [2024-11-27 10:03:37.556865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.116 qpair failed and we were unable to recover it. 00:31:22.116 [2024-11-27 10:03:37.557220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.116 [2024-11-27 10:03:37.557251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.116 qpair failed and we were unable to recover it. 00:31:22.116 [2024-11-27 10:03:37.557628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.116 [2024-11-27 10:03:37.557658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.116 qpair failed and we were unable to recover it. 00:31:22.116 [2024-11-27 10:03:37.558017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.116 [2024-11-27 10:03:37.558047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.116 qpair failed and we were unable to recover it. 00:31:22.116 [2024-11-27 10:03:37.558401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.116 [2024-11-27 10:03:37.558433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.116 qpair failed and we were unable to recover it. 00:31:22.116 [2024-11-27 10:03:37.558795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.116 [2024-11-27 10:03:37.558823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.116 qpair failed and we were unable to recover it. 00:31:22.116 [2024-11-27 10:03:37.559103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.116 [2024-11-27 10:03:37.559132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.116 qpair failed and we were unable to recover it. 00:31:22.116 [2024-11-27 10:03:37.559494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.116 [2024-11-27 10:03:37.559524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.116 qpair failed and we were unable to recover it. 
00:31:22.116 [2024-11-27 10:03:37.559893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.116 [2024-11-27 10:03:37.559921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.116 qpair failed and we were unable to recover it. 00:31:22.116 [2024-11-27 10:03:37.560271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.116 [2024-11-27 10:03:37.560301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.116 qpair failed and we were unable to recover it. 00:31:22.116 [2024-11-27 10:03:37.560685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.116 [2024-11-27 10:03:37.560714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.116 qpair failed and we were unable to recover it. 00:31:22.116 [2024-11-27 10:03:37.561060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.116 [2024-11-27 10:03:37.561089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.116 qpair failed and we were unable to recover it. 00:31:22.116 [2024-11-27 10:03:37.561457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.116 [2024-11-27 10:03:37.561496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.116 qpair failed and we were unable to recover it. 00:31:22.116 [2024-11-27 10:03:37.561851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.116 [2024-11-27 10:03:37.561880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.116 qpair failed and we were unable to recover it. 00:31:22.116 [2024-11-27 10:03:37.562238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.116 [2024-11-27 10:03:37.562268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.116 qpair failed and we were unable to recover it. 00:31:22.116 [2024-11-27 10:03:37.562656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.116 [2024-11-27 10:03:37.562686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.116 qpair failed and we were unable to recover it. 00:31:22.116 [2024-11-27 10:03:37.563032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.116 [2024-11-27 10:03:37.563060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.116 qpair failed and we were unable to recover it. 00:31:22.116 [2024-11-27 10:03:37.563404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.116 [2024-11-27 10:03:37.563434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.116 qpair failed and we were unable to recover it. 
00:31:22.116 [2024-11-27 10:03:37.563803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.116 [2024-11-27 10:03:37.563833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.116 qpair failed and we were unable to recover it. 00:31:22.116 [2024-11-27 10:03:37.564075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.116 [2024-11-27 10:03:37.564103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.116 qpair failed and we were unable to recover it. 00:31:22.116 [2024-11-27 10:03:37.564466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.116 [2024-11-27 10:03:37.564496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.116 qpair failed and we were unable to recover it. 00:31:22.116 [2024-11-27 10:03:37.564865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.116 [2024-11-27 10:03:37.564895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.116 qpair failed and we were unable to recover it. 00:31:22.116 [2024-11-27 10:03:37.565232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.116 [2024-11-27 10:03:37.565261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.116 qpair failed and we were unable to recover it. 00:31:22.116 [2024-11-27 10:03:37.565533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.116 [2024-11-27 10:03:37.565562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.116 qpair failed and we were unable to recover it. 00:31:22.116 [2024-11-27 10:03:37.565834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.116 [2024-11-27 10:03:37.565863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.116 qpair failed and we were unable to recover it. 00:31:22.116 [2024-11-27 10:03:37.566216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.116 [2024-11-27 10:03:37.566246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.116 qpair failed and we were unable to recover it. 00:31:22.116 [2024-11-27 10:03:37.566634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.116 [2024-11-27 10:03:37.566663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.116 qpair failed and we were unable to recover it. 00:31:22.117 [2024-11-27 10:03:37.567034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.117 [2024-11-27 10:03:37.567064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.117 qpair failed and we were unable to recover it. 
00:31:22.117 [2024-11-27 10:03:37.567413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.117 [2024-11-27 10:03:37.567442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.117 qpair failed and we were unable to recover it. 00:31:22.117 [2024-11-27 10:03:37.567815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.117 [2024-11-27 10:03:37.567843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.117 qpair failed and we were unable to recover it. 00:31:22.117 [2024-11-27 10:03:37.568088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.117 [2024-11-27 10:03:37.568123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.117 qpair failed and we were unable to recover it. 00:31:22.117 [2024-11-27 10:03:37.568517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.117 [2024-11-27 10:03:37.568546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.117 qpair failed and we were unable to recover it. 00:31:22.117 [2024-11-27 10:03:37.568889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.117 [2024-11-27 10:03:37.568917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.117 qpair failed and we were unable to recover it. 00:31:22.117 [2024-11-27 10:03:37.569276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.117 [2024-11-27 10:03:37.569307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.117 qpair failed and we were unable to recover it. 00:31:22.117 [2024-11-27 10:03:37.569658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.117 [2024-11-27 10:03:37.569686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.117 qpair failed and we were unable to recover it. 00:31:22.117 [2024-11-27 10:03:37.570051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.117 [2024-11-27 10:03:37.570081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.117 qpair failed and we were unable to recover it. 00:31:22.117 [2024-11-27 10:03:37.570444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.117 [2024-11-27 10:03:37.570475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.117 qpair failed and we were unable to recover it. 00:31:22.117 [2024-11-27 10:03:37.570815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.117 [2024-11-27 10:03:37.570843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.117 qpair failed and we were unable to recover it. 
00:31:22.117 [2024-11-27 10:03:37.571260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.117 [2024-11-27 10:03:37.571290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420 00:31:22.117 qpair failed and we were unable to recover it.
[The identical three-message sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fa21c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt, with successive timestamps from 10:03:37.571574 through 10:03:37.594067; only the timestamps differ.]
00:31:22.392 [2024-11-27 10:03:37.594612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.392 [2024-11-27 10:03:37.594723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.392 qpair failed and we were unable to recover it.
[From this point the failing qpair is tqpair=0x115b0c0; the same three-message sequence then repeats with successive timestamps from 10:03:37.595185 through 10:03:37.651764, every attempt again failing with errno = 111 against 10.0.0.2, port 4420.]
00:31:22.396 [2024-11-27 10:03:37.652113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.396 [2024-11-27 10:03:37.652141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.396 qpair failed and we were unable to recover it. 00:31:22.396 [2024-11-27 10:03:37.652533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.396 [2024-11-27 10:03:37.652563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.396 qpair failed and we were unable to recover it. 00:31:22.396 [2024-11-27 10:03:37.652893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.396 [2024-11-27 10:03:37.652923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.396 qpair failed and we were unable to recover it. 00:31:22.396 [2024-11-27 10:03:37.653291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.396 [2024-11-27 10:03:37.653321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.396 qpair failed and we were unable to recover it. 00:31:22.396 [2024-11-27 10:03:37.653702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.396 [2024-11-27 10:03:37.653731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.396 qpair failed and we were unable to recover it. 00:31:22.396 [2024-11-27 10:03:37.654050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.396 [2024-11-27 10:03:37.654078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.396 qpair failed and we were unable to recover it. 00:31:22.396 [2024-11-27 10:03:37.654437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.396 [2024-11-27 10:03:37.654467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.396 qpair failed and we were unable to recover it. 00:31:22.396 [2024-11-27 10:03:37.654808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.396 [2024-11-27 10:03:37.654838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.396 qpair failed and we were unable to recover it. 00:31:22.396 [2024-11-27 10:03:37.655178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.396 [2024-11-27 10:03:37.655209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.396 qpair failed and we were unable to recover it. 00:31:22.396 [2024-11-27 10:03:37.655568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.396 [2024-11-27 10:03:37.655597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.396 qpair failed and we were unable to recover it. 
00:31:22.396 [2024-11-27 10:03:37.655962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.396 [2024-11-27 10:03:37.655989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.396 qpair failed and we were unable to recover it. 00:31:22.396 [2024-11-27 10:03:37.656368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.396 [2024-11-27 10:03:37.656399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.396 qpair failed and we were unable to recover it. 00:31:22.396 [2024-11-27 10:03:37.656753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.396 [2024-11-27 10:03:37.656783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.396 qpair failed and we were unable to recover it. 00:31:22.396 [2024-11-27 10:03:37.657110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.396 [2024-11-27 10:03:37.657140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.396 qpair failed and we were unable to recover it. 00:31:22.396 [2024-11-27 10:03:37.657514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.396 [2024-11-27 10:03:37.657544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.396 qpair failed and we were unable to recover it. 00:31:22.396 [2024-11-27 10:03:37.657872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.396 [2024-11-27 10:03:37.657902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.396 qpair failed and we were unable to recover it. 00:31:22.396 [2024-11-27 10:03:37.658273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.396 [2024-11-27 10:03:37.658305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.396 qpair failed and we were unable to recover it. 00:31:22.396 [2024-11-27 10:03:37.658662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.396 [2024-11-27 10:03:37.658692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.396 qpair failed and we were unable to recover it. 00:31:22.396 [2024-11-27 10:03:37.659056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.396 [2024-11-27 10:03:37.659085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.396 qpair failed and we were unable to recover it. 00:31:22.396 [2024-11-27 10:03:37.659423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.396 [2024-11-27 10:03:37.659455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.396 qpair failed and we were unable to recover it. 
00:31:22.396 [2024-11-27 10:03:37.659812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.396 [2024-11-27 10:03:37.659841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.396 qpair failed and we were unable to recover it. 00:31:22.396 [2024-11-27 10:03:37.660205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.396 [2024-11-27 10:03:37.660236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.396 qpair failed and we were unable to recover it. 00:31:22.396 [2024-11-27 10:03:37.660601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.396 [2024-11-27 10:03:37.660629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.396 qpair failed and we were unable to recover it. 00:31:22.396 [2024-11-27 10:03:37.660982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.396 [2024-11-27 10:03:37.661012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.396 qpair failed and we were unable to recover it. 00:31:22.396 [2024-11-27 10:03:37.661423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.396 [2024-11-27 10:03:37.661453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.396 qpair failed and we were unable to recover it. 00:31:22.396 [2024-11-27 10:03:37.661819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.396 [2024-11-27 10:03:37.661850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.396 qpair failed and we were unable to recover it. 00:31:22.396 [2024-11-27 10:03:37.662217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.396 [2024-11-27 10:03:37.662247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.396 qpair failed and we were unable to recover it. 00:31:22.396 [2024-11-27 10:03:37.662625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.396 [2024-11-27 10:03:37.662654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.396 qpair failed and we were unable to recover it. 00:31:22.396 [2024-11-27 10:03:37.662968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.396 [2024-11-27 10:03:37.662999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.396 qpair failed and we were unable to recover it. 00:31:22.396 [2024-11-27 10:03:37.663372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.396 [2024-11-27 10:03:37.663402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.396 qpair failed and we were unable to recover it. 
00:31:22.396 [2024-11-27 10:03:37.663729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.396 [2024-11-27 10:03:37.663759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.396 qpair failed and we were unable to recover it. 00:31:22.396 [2024-11-27 10:03:37.664114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.396 [2024-11-27 10:03:37.664142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.396 qpair failed and we were unable to recover it. 00:31:22.396 [2024-11-27 10:03:37.664529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.396 [2024-11-27 10:03:37.664558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.396 qpair failed and we were unable to recover it. 00:31:22.396 [2024-11-27 10:03:37.664926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.664954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 00:31:22.397 [2024-11-27 10:03:37.665312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.665348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 00:31:22.397 [2024-11-27 10:03:37.665705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.665736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 00:31:22.397 [2024-11-27 10:03:37.666094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.666124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 00:31:22.397 [2024-11-27 10:03:37.666492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.666523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 00:31:22.397 [2024-11-27 10:03:37.666885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.666914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 00:31:22.397 [2024-11-27 10:03:37.667270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.667300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 
00:31:22.397 [2024-11-27 10:03:37.667622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.667653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 00:31:22.397 [2024-11-27 10:03:37.667991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.668021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 00:31:22.397 [2024-11-27 10:03:37.668332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.668363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 00:31:22.397 [2024-11-27 10:03:37.668707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.668736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 00:31:22.397 [2024-11-27 10:03:37.669094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.669125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 00:31:22.397 [2024-11-27 10:03:37.669514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.669546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 00:31:22.397 [2024-11-27 10:03:37.669917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.669949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 00:31:22.397 [2024-11-27 10:03:37.670201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.670233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 00:31:22.397 [2024-11-27 10:03:37.670526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.670556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 00:31:22.397 [2024-11-27 10:03:37.670946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.670975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 
00:31:22.397 [2024-11-27 10:03:37.671317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.671348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 00:31:22.397 [2024-11-27 10:03:37.671707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.671736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 00:31:22.397 [2024-11-27 10:03:37.672092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.672123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 00:31:22.397 [2024-11-27 10:03:37.672523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.672554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 00:31:22.397 [2024-11-27 10:03:37.672911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.672941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 00:31:22.397 [2024-11-27 10:03:37.673293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.673327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 00:31:22.397 [2024-11-27 10:03:37.673700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.673730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 00:31:22.397 [2024-11-27 10:03:37.673970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.673999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 00:31:22.397 [2024-11-27 10:03:37.674353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.674386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 00:31:22.397 [2024-11-27 10:03:37.674733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.674762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 
00:31:22.397 [2024-11-27 10:03:37.675047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.675077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 00:31:22.397 [2024-11-27 10:03:37.675339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.675381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 00:31:22.397 [2024-11-27 10:03:37.675728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.675757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 00:31:22.397 [2024-11-27 10:03:37.676039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.676068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 00:31:22.397 [2024-11-27 10:03:37.676393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.676424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 00:31:22.397 [2024-11-27 10:03:37.676768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.676797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 00:31:22.397 [2024-11-27 10:03:37.677168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.677199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 00:31:22.397 [2024-11-27 10:03:37.677572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.677601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 00:31:22.397 [2024-11-27 10:03:37.677946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.677975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 00:31:22.397 [2024-11-27 10:03:37.678333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.678363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 
00:31:22.397 [2024-11-27 10:03:37.678762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.678790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.397 qpair failed and we were unable to recover it. 00:31:22.397 [2024-11-27 10:03:37.679151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.397 [2024-11-27 10:03:37.679191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 00:31:22.398 [2024-11-27 10:03:37.679528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.679559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 00:31:22.398 [2024-11-27 10:03:37.679934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.679963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 00:31:22.398 [2024-11-27 10:03:37.680337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.680367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 00:31:22.398 [2024-11-27 10:03:37.680765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.680795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 00:31:22.398 [2024-11-27 10:03:37.681146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.681185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 00:31:22.398 [2024-11-27 10:03:37.681482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.681512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 00:31:22.398 [2024-11-27 10:03:37.681839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.681869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 00:31:22.398 [2024-11-27 10:03:37.682209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.682239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 
00:31:22.398 [2024-11-27 10:03:37.682637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.682666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 00:31:22.398 [2024-11-27 10:03:37.683022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.683049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 00:31:22.398 [2024-11-27 10:03:37.683402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.683432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 00:31:22.398 [2024-11-27 10:03:37.683775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.683804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 00:31:22.398 [2024-11-27 10:03:37.684180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.684210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 00:31:22.398 [2024-11-27 10:03:37.684533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.684563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 00:31:22.398 [2024-11-27 10:03:37.684894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.684921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 00:31:22.398 [2024-11-27 10:03:37.685274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.685305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 00:31:22.398 [2024-11-27 10:03:37.685665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.685699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 00:31:22.398 [2024-11-27 10:03:37.686042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.686070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 
00:31:22.398 [2024-11-27 10:03:37.686398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.686429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 00:31:22.398 [2024-11-27 10:03:37.686755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.686785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 00:31:22.398 [2024-11-27 10:03:37.687136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.687177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 00:31:22.398 [2024-11-27 10:03:37.687416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.687443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 00:31:22.398 [2024-11-27 10:03:37.687793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.687821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 00:31:22.398 [2024-11-27 10:03:37.688182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.688211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 00:31:22.398 [2024-11-27 10:03:37.688598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.688627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 00:31:22.398 [2024-11-27 10:03:37.688940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.688970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 00:31:22.398 [2024-11-27 10:03:37.689328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.689362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 00:31:22.398 [2024-11-27 10:03:37.689740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.689768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 
00:31:22.398 [2024-11-27 10:03:37.690131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.690172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 00:31:22.398 [2024-11-27 10:03:37.690507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.690535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 00:31:22.398 [2024-11-27 10:03:37.690914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.690943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 00:31:22.398 [2024-11-27 10:03:37.691320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.691352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 00:31:22.398 [2024-11-27 10:03:37.691727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.691756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 00:31:22.398 [2024-11-27 10:03:37.692086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.692117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 00:31:22.398 [2024-11-27 10:03:37.692500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.692531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 00:31:22.398 [2024-11-27 10:03:37.692889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.692918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 00:31:22.398 [2024-11-27 10:03:37.693149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.693192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 00:31:22.398 [2024-11-27 10:03:37.693594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.398 [2024-11-27 10:03:37.693623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.398 qpair failed and we were unable to recover it. 
00:31:22.399 [2024-11-27 10:03:37.693984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.399 [2024-11-27 10:03:37.694012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.399 qpair failed and we were unable to recover it. 00:31:22.399 [2024-11-27 10:03:37.694292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.399 [2024-11-27 10:03:37.694323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.399 qpair failed and we were unable to recover it. 00:31:22.399 [2024-11-27 10:03:37.694704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.399 [2024-11-27 10:03:37.694732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.399 qpair failed and we were unable to recover it. 00:31:22.399 [2024-11-27 10:03:37.695104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.399 [2024-11-27 10:03:37.695133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.399 qpair failed and we were unable to recover it. 00:31:22.399 [2024-11-27 10:03:37.695405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.399 [2024-11-27 10:03:37.695438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.399 qpair failed and we were unable to recover it. 00:31:22.399 [2024-11-27 10:03:37.695808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.399 [2024-11-27 10:03:37.695837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.399 qpair failed and we were unable to recover it. 00:31:22.399 [2024-11-27 10:03:37.696193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.399 [2024-11-27 10:03:37.696223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.399 qpair failed and we were unable to recover it. 00:31:22.399 [2024-11-27 10:03:37.696617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.399 [2024-11-27 10:03:37.696646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.399 qpair failed and we were unable to recover it. 00:31:22.399 [2024-11-27 10:03:37.696972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.399 [2024-11-27 10:03:37.697003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.399 qpair failed and we were unable to recover it. 00:31:22.399 [2024-11-27 10:03:37.697336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.399 [2024-11-27 10:03:37.697365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.399 qpair failed and we were unable to recover it. 
00:31:22.399 [2024-11-27 10:03:37.697722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.399 [2024-11-27 10:03:37.697752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.399 qpair failed and we were unable to recover it. 00:31:22.399 [2024-11-27 10:03:37.698101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.399 [2024-11-27 10:03:37.698132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.399 qpair failed and we were unable to recover it. 00:31:22.399 [2024-11-27 10:03:37.698526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.399 [2024-11-27 10:03:37.698556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.399 qpair failed and we were unable to recover it. 00:31:22.399 [2024-11-27 10:03:37.698938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.399 [2024-11-27 10:03:37.698968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.399 qpair failed and we were unable to recover it. 00:31:22.399 [2024-11-27 10:03:37.699222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.399 [2024-11-27 10:03:37.699253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.399 qpair failed and we were unable to recover it. 00:31:22.399 [2024-11-27 10:03:37.699600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.399 [2024-11-27 10:03:37.699630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.399 qpair failed and we were unable to recover it. 00:31:22.399 [2024-11-27 10:03:37.699984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.399 [2024-11-27 10:03:37.700013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.399 qpair failed and we were unable to recover it. 00:31:22.399 [2024-11-27 10:03:37.700402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.399 [2024-11-27 10:03:37.700432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.399 qpair failed and we were unable to recover it. 00:31:22.399 [2024-11-27 10:03:37.700774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.399 [2024-11-27 10:03:37.700802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.399 qpair failed and we were unable to recover it. 00:31:22.399 [2024-11-27 10:03:37.701140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.399 [2024-11-27 10:03:37.701186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.399 qpair failed and we were unable to recover it. 
00:31:22.399 [2024-11-27 10:03:37.701545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.399 [2024-11-27 10:03:37.701573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:22.399 qpair failed and we were unable to recover it.
00:31:22.404 [... the same three-line error repeats, with new timestamps only, for every reconnect attempt from 2024-11-27 10:03:37.701545 through 10:03:37.782007, all against tqpair=0x115b0c0, addr=10.0.0.2, port=4420 ...]
00:31:22.405 [2024-11-27 10:03:37.782384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.405 [2024-11-27 10:03:37.782414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.405 qpair failed and we were unable to recover it. 00:31:22.405 [2024-11-27 10:03:37.782742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.405 [2024-11-27 10:03:37.782770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.405 qpair failed and we were unable to recover it. 00:31:22.405 [2024-11-27 10:03:37.783099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.405 [2024-11-27 10:03:37.783128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.405 qpair failed and we were unable to recover it. 00:31:22.405 [2024-11-27 10:03:37.783501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.405 [2024-11-27 10:03:37.783532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.405 qpair failed and we were unable to recover it. 00:31:22.405 [2024-11-27 10:03:37.783868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.405 [2024-11-27 10:03:37.783897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.405 qpair failed and we were unable to recover it. 00:31:22.405 [2024-11-27 10:03:37.784277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.405 [2024-11-27 10:03:37.784306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.405 qpair failed and we were unable to recover it. 00:31:22.405 [2024-11-27 10:03:37.784538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.405 [2024-11-27 10:03:37.784577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.405 qpair failed and we were unable to recover it. 00:31:22.405 [2024-11-27 10:03:37.784991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.405 [2024-11-27 10:03:37.785019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.405 qpair failed and we were unable to recover it. 00:31:22.405 [2024-11-27 10:03:37.785380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.405 [2024-11-27 10:03:37.785412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.405 qpair failed and we were unable to recover it. 00:31:22.405 [2024-11-27 10:03:37.785744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.405 [2024-11-27 10:03:37.785773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.405 qpair failed and we were unable to recover it. 
00:31:22.405 [2024-11-27 10:03:37.786151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.405 [2024-11-27 10:03:37.786191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.405 qpair failed and we were unable to recover it. 00:31:22.405 [2024-11-27 10:03:37.786564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.405 [2024-11-27 10:03:37.786593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.405 qpair failed and we were unable to recover it. 00:31:22.405 [2024-11-27 10:03:37.786924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.405 [2024-11-27 10:03:37.786954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.405 qpair failed and we were unable to recover it. 00:31:22.405 [2024-11-27 10:03:37.787279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.405 [2024-11-27 10:03:37.787315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.405 qpair failed and we were unable to recover it. 00:31:22.405 [2024-11-27 10:03:37.787664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.405 [2024-11-27 10:03:37.787693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.405 qpair failed and we were unable to recover it. 00:31:22.405 [2024-11-27 10:03:37.788079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.405 [2024-11-27 10:03:37.788107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.405 qpair failed and we were unable to recover it. 00:31:22.405 [2024-11-27 10:03:37.788476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.405 [2024-11-27 10:03:37.788509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.405 qpair failed and we were unable to recover it. 00:31:22.405 [2024-11-27 10:03:37.788767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.405 [2024-11-27 10:03:37.788800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.405 qpair failed and we were unable to recover it. 00:31:22.405 [2024-11-27 10:03:37.789139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.405 [2024-11-27 10:03:37.789182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.405 qpair failed and we were unable to recover it. 00:31:22.405 [2024-11-27 10:03:37.789513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.405 [2024-11-27 10:03:37.789541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.405 qpair failed and we were unable to recover it. 
00:31:22.405 [2024-11-27 10:03:37.789780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.405 [2024-11-27 10:03:37.789808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.405 qpair failed and we were unable to recover it. 00:31:22.405 [2024-11-27 10:03:37.790147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.405 [2024-11-27 10:03:37.790188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.405 qpair failed and we were unable to recover it. 00:31:22.405 [2024-11-27 10:03:37.790529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.405 [2024-11-27 10:03:37.790559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.405 qpair failed and we were unable to recover it. 00:31:22.405 [2024-11-27 10:03:37.790917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.405 [2024-11-27 10:03:37.790945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.405 qpair failed and we were unable to recover it. 00:31:22.405 [2024-11-27 10:03:37.791271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.405 [2024-11-27 10:03:37.791304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.405 qpair failed and we were unable to recover it. 00:31:22.405 [2024-11-27 10:03:37.791699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.405 [2024-11-27 10:03:37.791727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.405 qpair failed and we were unable to recover it. 00:31:22.405 [2024-11-27 10:03:37.791971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.405 [2024-11-27 10:03:37.792003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.405 qpair failed and we were unable to recover it. 00:31:22.405 [2024-11-27 10:03:37.792364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.405 [2024-11-27 10:03:37.792394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.405 qpair failed and we were unable to recover it. 00:31:22.405 [2024-11-27 10:03:37.792797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.405 [2024-11-27 10:03:37.792825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.405 qpair failed and we were unable to recover it. 00:31:22.405 [2024-11-27 10:03:37.793184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.793213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 
00:31:22.406 [2024-11-27 10:03:37.793544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.793573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 00:31:22.406 [2024-11-27 10:03:37.793945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.793973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 00:31:22.406 [2024-11-27 10:03:37.794340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.794369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 00:31:22.406 [2024-11-27 10:03:37.794735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.794770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 00:31:22.406 [2024-11-27 10:03:37.795013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.795042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 00:31:22.406 [2024-11-27 10:03:37.795286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.795318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 00:31:22.406 [2024-11-27 10:03:37.795640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.795668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 00:31:22.406 [2024-11-27 10:03:37.796025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.796053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 00:31:22.406 [2024-11-27 10:03:37.796410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.796440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 00:31:22.406 [2024-11-27 10:03:37.796812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.796841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 
00:31:22.406 [2024-11-27 10:03:37.797222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.797262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 00:31:22.406 [2024-11-27 10:03:37.797604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.797633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 00:31:22.406 [2024-11-27 10:03:37.797991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.798022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 00:31:22.406 [2024-11-27 10:03:37.798349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.798380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 00:31:22.406 [2024-11-27 10:03:37.798718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.798748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 00:31:22.406 [2024-11-27 10:03:37.798990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.799019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 00:31:22.406 [2024-11-27 10:03:37.799407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.799437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 00:31:22.406 [2024-11-27 10:03:37.799781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.799811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 00:31:22.406 [2024-11-27 10:03:37.800169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.800198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 00:31:22.406 [2024-11-27 10:03:37.800562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.800590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 
00:31:22.406 [2024-11-27 10:03:37.800922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.800952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 00:31:22.406 [2024-11-27 10:03:37.801312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.801342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 00:31:22.406 [2024-11-27 10:03:37.801681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.801711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 00:31:22.406 [2024-11-27 10:03:37.802042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.802071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 00:31:22.406 [2024-11-27 10:03:37.802422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.802455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 00:31:22.406 [2024-11-27 10:03:37.802797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.802826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 00:31:22.406 [2024-11-27 10:03:37.803146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.803191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 00:31:22.406 [2024-11-27 10:03:37.803547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.803576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 00:31:22.406 [2024-11-27 10:03:37.803940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.803970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 00:31:22.406 [2024-11-27 10:03:37.804347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.804378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 
00:31:22.406 [2024-11-27 10:03:37.804605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.804642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 00:31:22.406 [2024-11-27 10:03:37.804993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.805022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 00:31:22.406 [2024-11-27 10:03:37.805341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.805371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 00:31:22.406 [2024-11-27 10:03:37.805748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.805778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 00:31:22.406 [2024-11-27 10:03:37.806013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.806041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 00:31:22.406 [2024-11-27 10:03:37.806392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.806423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 00:31:22.406 [2024-11-27 10:03:37.806691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.406 [2024-11-27 10:03:37.806719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.406 qpair failed and we were unable to recover it. 00:31:22.406 [2024-11-27 10:03:37.807073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.807102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 00:31:22.407 [2024-11-27 10:03:37.807494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.807526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 00:31:22.407 [2024-11-27 10:03:37.807902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.807932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 
00:31:22.407 [2024-11-27 10:03:37.808296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.808328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 00:31:22.407 [2024-11-27 10:03:37.808707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.808737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 00:31:22.407 [2024-11-27 10:03:37.809082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.809112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 00:31:22.407 [2024-11-27 10:03:37.809459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.809489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 00:31:22.407 [2024-11-27 10:03:37.809811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.809842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 00:31:22.407 [2024-11-27 10:03:37.810250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.810280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 00:31:22.407 [2024-11-27 10:03:37.810639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.810677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 00:31:22.407 [2024-11-27 10:03:37.811033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.811062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 00:31:22.407 [2024-11-27 10:03:37.811424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.811455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 00:31:22.407 [2024-11-27 10:03:37.811787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.811815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 
00:31:22.407 [2024-11-27 10:03:37.812128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.812174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 00:31:22.407 [2024-11-27 10:03:37.812499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.812528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 00:31:22.407 [2024-11-27 10:03:37.812903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.812932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 00:31:22.407 [2024-11-27 10:03:37.813311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.813343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 00:31:22.407 [2024-11-27 10:03:37.813714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.813743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 00:31:22.407 [2024-11-27 10:03:37.814093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.814123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 00:31:22.407 [2024-11-27 10:03:37.814478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.814509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 00:31:22.407 [2024-11-27 10:03:37.814761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.814791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 00:31:22.407 [2024-11-27 10:03:37.815081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.815111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 00:31:22.407 [2024-11-27 10:03:37.815507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.815537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 
00:31:22.407 [2024-11-27 10:03:37.815791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.815822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 00:31:22.407 [2024-11-27 10:03:37.816185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.816216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 00:31:22.407 [2024-11-27 10:03:37.816531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.816560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 00:31:22.407 [2024-11-27 10:03:37.816878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.816908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 00:31:22.407 [2024-11-27 10:03:37.817244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.817277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 00:31:22.407 [2024-11-27 10:03:37.817591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.817622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 00:31:22.407 [2024-11-27 10:03:37.817940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.817971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 00:31:22.407 [2024-11-27 10:03:37.818354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.818384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 00:31:22.407 [2024-11-27 10:03:37.818720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.818749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 00:31:22.407 [2024-11-27 10:03:37.819105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.819134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 
00:31:22.407 [2024-11-27 10:03:37.819498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.819528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 00:31:22.407 [2024-11-27 10:03:37.819894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.819925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 00:31:22.407 [2024-11-27 10:03:37.820147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.820191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 00:31:22.407 [2024-11-27 10:03:37.820453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.820485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 00:31:22.407 [2024-11-27 10:03:37.820855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.820885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 00:31:22.407 [2024-11-27 10:03:37.821170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.407 [2024-11-27 10:03:37.821201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.407 qpair failed and we were unable to recover it. 00:31:22.408 [2024-11-27 10:03:37.821513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.408 [2024-11-27 10:03:37.821543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.408 qpair failed and we were unable to recover it. 00:31:22.408 [2024-11-27 10:03:37.821866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.408 [2024-11-27 10:03:37.821896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.408 qpair failed and we were unable to recover it. 00:31:22.408 [2024-11-27 10:03:37.822234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.408 [2024-11-27 10:03:37.822265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.408 qpair failed and we were unable to recover it. 00:31:22.408 [2024-11-27 10:03:37.822644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.408 [2024-11-27 10:03:37.822673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.408 qpair failed and we were unable to recover it. 
00:31:22.408 [2024-11-27 10:03:37.823055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.408 [2024-11-27 10:03:37.823085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.408 qpair failed and we were unable to recover it. 00:31:22.408 [2024-11-27 10:03:37.823439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.408 [2024-11-27 10:03:37.823469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.408 qpair failed and we were unable to recover it. 00:31:22.408 [2024-11-27 10:03:37.823812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.408 [2024-11-27 10:03:37.823843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.408 qpair failed and we were unable to recover it. 00:31:22.408 [2024-11-27 10:03:37.824221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.408 [2024-11-27 10:03:37.824251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.408 qpair failed and we were unable to recover it. 00:31:22.408 [2024-11-27 10:03:37.824597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.408 [2024-11-27 10:03:37.824627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.408 qpair failed and we were unable to recover it. 00:31:22.408 [2024-11-27 10:03:37.824961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.408 [2024-11-27 10:03:37.824991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.408 qpair failed and we were unable to recover it. 00:31:22.408 [2024-11-27 10:03:37.825320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.408 [2024-11-27 10:03:37.825357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.408 qpair failed and we were unable to recover it. 00:31:22.408 [2024-11-27 10:03:37.825697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.408 [2024-11-27 10:03:37.825726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.408 qpair failed and we were unable to recover it. 00:31:22.408 [2024-11-27 10:03:37.826057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.408 [2024-11-27 10:03:37.826088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.408 qpair failed and we were unable to recover it. 00:31:22.408 [2024-11-27 10:03:37.826446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.408 [2024-11-27 10:03:37.826477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.408 qpair failed and we were unable to recover it. 
00:31:22.408 [2024-11-27 10:03:37.826785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.408 [2024-11-27 10:03:37.826816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.408 qpair failed and we were unable to recover it. 00:31:22.408 [2024-11-27 10:03:37.827190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.408 [2024-11-27 10:03:37.827220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.408 qpair failed and we were unable to recover it. 00:31:22.408 [2024-11-27 10:03:37.827592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.408 [2024-11-27 10:03:37.827621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.408 qpair failed and we were unable to recover it. 00:31:22.408 [2024-11-27 10:03:37.827952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.408 [2024-11-27 10:03:37.827982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.408 qpair failed and we were unable to recover it. 00:31:22.408 [2024-11-27 10:03:37.828321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.408 [2024-11-27 10:03:37.828351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.408 qpair failed and we were unable to recover it. 00:31:22.408 [2024-11-27 10:03:37.828706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.408 [2024-11-27 10:03:37.828734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.408 qpair failed and we were unable to recover it. 00:31:22.408 [2024-11-27 10:03:37.829095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.408 [2024-11-27 10:03:37.829125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.408 qpair failed and we were unable to recover it. 00:31:22.408 [2024-11-27 10:03:37.829530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.408 [2024-11-27 10:03:37.829559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.408 qpair failed and we were unable to recover it. 00:31:22.408 [2024-11-27 10:03:37.829999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.408 [2024-11-27 10:03:37.830039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.408 qpair failed and we were unable to recover it. 00:31:22.408 [2024-11-27 10:03:37.830405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.408 [2024-11-27 10:03:37.830437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.408 qpair failed and we were unable to recover it. 
00:31:22.408 [2024-11-27 10:03:37.830820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.408 [2024-11-27 10:03:37.830850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:22.408 qpair failed and we were unable to recover it.
[entries from 10:03:37.831198 through 10:03:37.900716 repeat the same connect() failure (errno = 111) and unrecoverable qpair error for tqpair=0x115b0c0 with addr=10.0.0.2, port=4420]
00:31:22.686 [2024-11-27 10:03:37.901176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.686 [2024-11-27 10:03:37.901269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:22.686 qpair failed and we were unable to recover it.
00:31:22.686 [2024-11-27 10:03:37.902541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.686 [2024-11-27 10:03:37.902637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.686 qpair failed and we were unable to recover it. 00:31:22.686 [2024-11-27 10:03:37.902967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.686 [2024-11-27 10:03:37.903003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.686 qpair failed and we were unable to recover it. 00:31:22.686 [2024-11-27 10:03:37.903386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.687 [2024-11-27 10:03:37.903415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.687 qpair failed and we were unable to recover it. 00:31:22.687 [2024-11-27 10:03:37.903832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.687 [2024-11-27 10:03:37.903864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.687 qpair failed and we were unable to recover it. 00:31:22.687 [2024-11-27 10:03:37.904220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.687 [2024-11-27 10:03:37.904250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.687 qpair failed and we were unable to recover it. 00:31:22.687 [2024-11-27 10:03:37.904568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.687 [2024-11-27 10:03:37.904595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.687 qpair failed and we were unable to recover it. 00:31:22.687 [2024-11-27 10:03:37.904843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.687 [2024-11-27 10:03:37.904868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.687 qpair failed and we were unable to recover it. 00:31:22.687 [2024-11-27 10:03:37.905228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.687 [2024-11-27 10:03:37.905255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.687 qpair failed and we were unable to recover it. 00:31:22.687 [2024-11-27 10:03:37.905621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.687 [2024-11-27 10:03:37.905650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.687 qpair failed and we were unable to recover it. 00:31:22.687 [2024-11-27 10:03:37.906036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.687 [2024-11-27 10:03:37.906064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.687 qpair failed and we were unable to recover it. 
00:31:22.687 [2024-11-27 10:03:37.906465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.687 [2024-11-27 10:03:37.906493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.687 qpair failed and we were unable to recover it. 00:31:22.687 [2024-11-27 10:03:37.907640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.687 [2024-11-27 10:03:37.907694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.687 qpair failed and we were unable to recover it. 00:31:22.687 [2024-11-27 10:03:37.908100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.687 [2024-11-27 10:03:37.908129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.687 qpair failed and we were unable to recover it. 00:31:22.687 [2024-11-27 10:03:37.908498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.687 [2024-11-27 10:03:37.908527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.687 qpair failed and we were unable to recover it. 00:31:22.687 [2024-11-27 10:03:37.908885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.687 [2024-11-27 10:03:37.908912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.687 qpair failed and we were unable to recover it. 00:31:22.687 [2024-11-27 10:03:37.909317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.687 [2024-11-27 10:03:37.909347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.687 qpair failed and we were unable to recover it. 00:31:22.687 [2024-11-27 10:03:37.909675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.687 [2024-11-27 10:03:37.909703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.687 qpair failed and we were unable to recover it. 00:31:22.687 [2024-11-27 10:03:37.910046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.687 [2024-11-27 10:03:37.910074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.687 qpair failed and we were unable to recover it. 00:31:22.687 [2024-11-27 10:03:37.910455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.687 [2024-11-27 10:03:37.910482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.687 qpair failed and we were unable to recover it. 00:31:22.687 [2024-11-27 10:03:37.910842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.687 [2024-11-27 10:03:37.910868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.687 qpair failed and we were unable to recover it. 
00:31:22.687 [2024-11-27 10:03:37.911206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.687 [2024-11-27 10:03:37.911233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.687 qpair failed and we were unable to recover it. 00:31:22.687 [2024-11-27 10:03:37.911583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.687 [2024-11-27 10:03:37.911610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.687 qpair failed and we were unable to recover it. 00:31:22.687 [2024-11-27 10:03:37.911947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.687 [2024-11-27 10:03:37.911974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.687 qpair failed and we were unable to recover it. 00:31:22.687 [2024-11-27 10:03:37.912304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.687 [2024-11-27 10:03:37.912331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.687 qpair failed and we were unable to recover it. 00:31:22.687 [2024-11-27 10:03:37.912698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.687 [2024-11-27 10:03:37.912728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.687 qpair failed and we were unable to recover it. 00:31:22.687 [2024-11-27 10:03:37.913097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.687 [2024-11-27 10:03:37.913129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.687 qpair failed and we were unable to recover it. 00:31:22.687 [2024-11-27 10:03:37.913398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.687 [2024-11-27 10:03:37.913436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.687 qpair failed and we were unable to recover it. 00:31:22.687 [2024-11-27 10:03:37.913773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.687 [2024-11-27 10:03:37.913805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.687 qpair failed and we were unable to recover it. 00:31:22.687 [2024-11-27 10:03:37.914203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.687 [2024-11-27 10:03:37.914235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.687 qpair failed and we were unable to recover it. 00:31:22.687 [2024-11-27 10:03:37.914669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.687 [2024-11-27 10:03:37.914698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.687 qpair failed and we were unable to recover it. 
00:31:22.687 [2024-11-27 10:03:37.914966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.687 [2024-11-27 10:03:37.914995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.687 qpair failed and we were unable to recover it. 00:31:22.687 [2024-11-27 10:03:37.915283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.687 [2024-11-27 10:03:37.915314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.687 qpair failed and we were unable to recover it. 00:31:22.687 [2024-11-27 10:03:37.915677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.687 [2024-11-27 10:03:37.915706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.687 qpair failed and we were unable to recover it. 00:31:22.687 [2024-11-27 10:03:37.916949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.916993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 00:31:22.688 [2024-11-27 10:03:37.917356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.917389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 00:31:22.688 [2024-11-27 10:03:37.917742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.917770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 00:31:22.688 [2024-11-27 10:03:37.918127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.918156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 00:31:22.688 [2024-11-27 10:03:37.918574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.918604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 00:31:22.688 [2024-11-27 10:03:37.918960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.918992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 00:31:22.688 [2024-11-27 10:03:37.919383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.919416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 
00:31:22.688 [2024-11-27 10:03:37.919664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.919693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 00:31:22.688 [2024-11-27 10:03:37.920077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.920111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 00:31:22.688 [2024-11-27 10:03:37.920559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.920589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 00:31:22.688 [2024-11-27 10:03:37.920953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.920982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 00:31:22.688 [2024-11-27 10:03:37.921355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.921385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 00:31:22.688 [2024-11-27 10:03:37.921754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.921783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 00:31:22.688 [2024-11-27 10:03:37.922149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.922188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 00:31:22.688 [2024-11-27 10:03:37.922542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.922572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 00:31:22.688 [2024-11-27 10:03:37.922915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.922944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 00:31:22.688 [2024-11-27 10:03:37.923309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.923340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 
00:31:22.688 [2024-11-27 10:03:37.923579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.923607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 00:31:22.688 [2024-11-27 10:03:37.924004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.924033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 00:31:22.688 [2024-11-27 10:03:37.924378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.924409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 00:31:22.688 [2024-11-27 10:03:37.924777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.924805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 00:31:22.688 [2024-11-27 10:03:37.925080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.925108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 00:31:22.688 [2024-11-27 10:03:37.925509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.925541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 00:31:22.688 [2024-11-27 10:03:37.925868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.925896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 00:31:22.688 [2024-11-27 10:03:37.926261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.926292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 00:31:22.688 [2024-11-27 10:03:37.926655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.926684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 00:31:22.688 [2024-11-27 10:03:37.927057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.927086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 
00:31:22.688 [2024-11-27 10:03:37.927464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.927495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 00:31:22.688 [2024-11-27 10:03:37.927851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.927881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 00:31:22.688 [2024-11-27 10:03:37.928049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.928082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 00:31:22.688 [2024-11-27 10:03:37.928468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.928498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 00:31:22.688 [2024-11-27 10:03:37.928851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.928880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 00:31:22.688 [2024-11-27 10:03:37.929225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.929256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 00:31:22.688 [2024-11-27 10:03:37.929510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.929543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 00:31:22.688 [2024-11-27 10:03:37.929950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.929980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 00:31:22.688 [2024-11-27 10:03:37.930342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.930372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 00:31:22.688 [2024-11-27 10:03:37.930771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.930799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 
00:31:22.688 [2024-11-27 10:03:37.931043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.688 [2024-11-27 10:03:37.931075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.688 qpair failed and we were unable to recover it. 00:31:22.689 [2024-11-27 10:03:37.931461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.931493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 00:31:22.689 [2024-11-27 10:03:37.931855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.931883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 00:31:22.689 [2024-11-27 10:03:37.932244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.932275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 00:31:22.689 [2024-11-27 10:03:37.932635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.932663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 00:31:22.689 [2024-11-27 10:03:37.933029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.933057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 00:31:22.689 [2024-11-27 10:03:37.933498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.933528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 00:31:22.689 [2024-11-27 10:03:37.933873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.933903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 00:31:22.689 [2024-11-27 10:03:37.934271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.934316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 00:31:22.689 [2024-11-27 10:03:37.934724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.934753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 
00:31:22.689 [2024-11-27 10:03:37.935093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.935123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 00:31:22.689 [2024-11-27 10:03:37.935488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.935518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 00:31:22.689 [2024-11-27 10:03:37.935886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.935915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 00:31:22.689 [2024-11-27 10:03:37.936245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.936276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 00:31:22.689 [2024-11-27 10:03:37.936637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.936666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 00:31:22.689 [2024-11-27 10:03:37.937021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.937050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 00:31:22.689 [2024-11-27 10:03:37.937422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.937452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 00:31:22.689 [2024-11-27 10:03:37.937812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.937841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 00:31:22.689 [2024-11-27 10:03:37.938215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.938245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 00:31:22.689 [2024-11-27 10:03:37.938601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.938630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 
00:31:22.689 [2024-11-27 10:03:37.939011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.939042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 00:31:22.689 [2024-11-27 10:03:37.939415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.939446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 00:31:22.689 [2024-11-27 10:03:37.939818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.939847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 00:31:22.689 [2024-11-27 10:03:37.940212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.940242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 00:31:22.689 [2024-11-27 10:03:37.940515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.940546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 00:31:22.689 [2024-11-27 10:03:37.940782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.940814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 00:31:22.689 [2024-11-27 10:03:37.941222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.941253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 00:31:22.689 [2024-11-27 10:03:37.941622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.941651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 00:31:22.689 [2024-11-27 10:03:37.942013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.942042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 00:31:22.689 [2024-11-27 10:03:37.942416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.942446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 
00:31:22.689 [2024-11-27 10:03:37.942711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.942739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 00:31:22.689 [2024-11-27 10:03:37.943090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.943119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 00:31:22.689 [2024-11-27 10:03:37.943491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.943522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 00:31:22.689 [2024-11-27 10:03:37.943757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.943789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 00:31:22.689 [2024-11-27 10:03:37.944038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.944066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 00:31:22.689 [2024-11-27 10:03:37.944436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.944467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 00:31:22.689 [2024-11-27 10:03:37.944843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.944872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 00:31:22.689 [2024-11-27 10:03:37.945118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.689 [2024-11-27 10:03:37.945151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.689 qpair failed and we were unable to recover it. 00:31:22.689 [2024-11-27 10:03:37.945422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.945451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 00:31:22.690 [2024-11-27 10:03:37.945699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.945729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 
00:31:22.690 [2024-11-27 10:03:37.946078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.946107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 00:31:22.690 [2024-11-27 10:03:37.946475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.946505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 00:31:22.690 [2024-11-27 10:03:37.946853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.946882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 00:31:22.690 [2024-11-27 10:03:37.947059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.947087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 00:31:22.690 [2024-11-27 10:03:37.947452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.947482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 00:31:22.690 [2024-11-27 10:03:37.947646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.947675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 00:31:22.690 [2024-11-27 10:03:37.947930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.947959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 00:31:22.690 [2024-11-27 10:03:37.948351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.948382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 00:31:22.690 [2024-11-27 10:03:37.948743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.948781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 00:31:22.690 [2024-11-27 10:03:37.949148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.949193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 
00:31:22.690 [2024-11-27 10:03:37.949605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.949636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 00:31:22.690 [2024-11-27 10:03:37.949981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.950011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 00:31:22.690 [2024-11-27 10:03:37.950343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.950374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 00:31:22.690 [2024-11-27 10:03:37.950596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.950624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 00:31:22.690 [2024-11-27 10:03:37.951078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.951108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 00:31:22.690 [2024-11-27 10:03:37.951357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.951387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 00:31:22.690 [2024-11-27 10:03:37.951734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.951762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 00:31:22.690 [2024-11-27 10:03:37.952192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.952222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 00:31:22.690 [2024-11-27 10:03:37.952580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.952609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 00:31:22.690 [2024-11-27 10:03:37.952975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.953004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 
00:31:22.690 [2024-11-27 10:03:37.953382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.953412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 00:31:22.690 [2024-11-27 10:03:37.953766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.953794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 00:31:22.690 [2024-11-27 10:03:37.954172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.954203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 00:31:22.690 [2024-11-27 10:03:37.954562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.954590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 00:31:22.690 [2024-11-27 10:03:37.954946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.954975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 00:31:22.690 [2024-11-27 10:03:37.955350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.955381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 00:31:22.690 [2024-11-27 10:03:37.955737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.955766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 00:31:22.690 [2024-11-27 10:03:37.956123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.956151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 00:31:22.690 [2024-11-27 10:03:37.956536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.956566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 00:31:22.690 [2024-11-27 10:03:37.956920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.956951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 
00:31:22.690 [2024-11-27 10:03:37.957325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.957355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 00:31:22.690 [2024-11-27 10:03:37.957732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.957761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 00:31:22.690 [2024-11-27 10:03:37.958116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.958145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 00:31:22.690 [2024-11-27 10:03:37.958601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.958631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 00:31:22.690 [2024-11-27 10:03:37.958987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.959015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 00:31:22.690 [2024-11-27 10:03:37.959393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.690 [2024-11-27 10:03:37.959423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.690 qpair failed and we were unable to recover it. 00:31:22.691 [2024-11-27 10:03:37.959794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.691 [2024-11-27 10:03:37.959823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.691 qpair failed and we were unable to recover it. 00:31:22.691 [2024-11-27 10:03:37.960221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.691 [2024-11-27 10:03:37.960250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.691 qpair failed and we were unable to recover it. 00:31:22.691 [2024-11-27 10:03:37.960575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.691 [2024-11-27 10:03:37.960607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.691 qpair failed and we were unable to recover it. 00:31:22.691 [2024-11-27 10:03:37.960972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.691 [2024-11-27 10:03:37.961000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.691 qpair failed and we were unable to recover it. 
00:31:22.691 [2024-11-27 10:03:37.961356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.691 [2024-11-27 10:03:37.961387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.691 qpair failed and we were unable to recover it. 00:31:22.691 [2024-11-27 10:03:37.961626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.691 [2024-11-27 10:03:37.961655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.691 qpair failed and we were unable to recover it. 00:31:22.691 [2024-11-27 10:03:37.962006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.691 [2024-11-27 10:03:37.962035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.691 qpair failed and we were unable to recover it. 00:31:22.691 [2024-11-27 10:03:37.962394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.691 [2024-11-27 10:03:37.962424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.691 qpair failed and we were unable to recover it. 00:31:22.691 [2024-11-27 10:03:37.962784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.691 [2024-11-27 10:03:37.962814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.691 qpair failed and we were unable to recover it. 00:31:22.691 [2024-11-27 10:03:37.963064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.691 [2024-11-27 10:03:37.963092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.691 qpair failed and we were unable to recover it. 00:31:22.691 [2024-11-27 10:03:37.963428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.691 [2024-11-27 10:03:37.963458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.691 qpair failed and we were unable to recover it. 00:31:22.691 [2024-11-27 10:03:37.963823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.691 [2024-11-27 10:03:37.963854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.691 qpair failed and we were unable to recover it. 00:31:22.691 [2024-11-27 10:03:37.964209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.691 [2024-11-27 10:03:37.964256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.691 qpair failed and we were unable to recover it. 00:31:22.691 [2024-11-27 10:03:37.964511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.691 [2024-11-27 10:03:37.964540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.691 qpair failed and we were unable to recover it. 
00:31:22.696 [2024-11-27 10:03:38.039746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.696 [2024-11-27 10:03:38.039774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.696 qpair failed and we were unable to recover it. 00:31:22.696 [2024-11-27 10:03:38.040230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.696 [2024-11-27 10:03:38.040260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.696 qpair failed and we were unable to recover it. 00:31:22.696 [2024-11-27 10:03:38.040668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.696 [2024-11-27 10:03:38.040699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.696 qpair failed and we were unable to recover it. 00:31:22.696 [2024-11-27 10:03:38.041050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.696 [2024-11-27 10:03:38.041078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.696 qpair failed and we were unable to recover it. 00:31:22.696 [2024-11-27 10:03:38.041422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.696 [2024-11-27 10:03:38.041451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.696 qpair failed and we were unable to recover it. 00:31:22.696 [2024-11-27 10:03:38.041810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.696 [2024-11-27 10:03:38.041838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.696 qpair failed and we were unable to recover it. 00:31:22.696 [2024-11-27 10:03:38.042231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.696 [2024-11-27 10:03:38.042262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.696 qpair failed and we were unable to recover it. 00:31:22.696 [2024-11-27 10:03:38.042641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.696 [2024-11-27 10:03:38.042670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.696 qpair failed and we were unable to recover it. 00:31:22.696 [2024-11-27 10:03:38.043048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.696 [2024-11-27 10:03:38.043091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.696 qpair failed and we were unable to recover it. 00:31:22.696 [2024-11-27 10:03:38.043446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.696 [2024-11-27 10:03:38.043479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.696 qpair failed and we were unable to recover it. 
00:31:22.696 [2024-11-27 10:03:38.043725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.696 [2024-11-27 10:03:38.043753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.696 qpair failed and we were unable to recover it. 00:31:22.696 [2024-11-27 10:03:38.044114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.696 [2024-11-27 10:03:38.044143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.696 qpair failed and we were unable to recover it. 00:31:22.696 [2024-11-27 10:03:38.044525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.696 [2024-11-27 10:03:38.044555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.696 qpair failed and we were unable to recover it. 00:31:22.696 [2024-11-27 10:03:38.044947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.696 [2024-11-27 10:03:38.044978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.696 qpair failed and we were unable to recover it. 00:31:22.697 [2024-11-27 10:03:38.045223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.045254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 00:31:22.697 [2024-11-27 10:03:38.045603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.045634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 00:31:22.697 [2024-11-27 10:03:38.045929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.045957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 00:31:22.697 [2024-11-27 10:03:38.046314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.046344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 00:31:22.697 [2024-11-27 10:03:38.046712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.046741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 00:31:22.697 [2024-11-27 10:03:38.047130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.047166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 
00:31:22.697 [2024-11-27 10:03:38.047533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.047561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 00:31:22.697 [2024-11-27 10:03:38.047835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.047865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 00:31:22.697 [2024-11-27 10:03:38.048233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.048264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 00:31:22.697 [2024-11-27 10:03:38.048651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.048681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 00:31:22.697 [2024-11-27 10:03:38.048925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.048958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 00:31:22.697 [2024-11-27 10:03:38.049365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.049395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 00:31:22.697 [2024-11-27 10:03:38.049759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.049787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 00:31:22.697 [2024-11-27 10:03:38.050187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.050217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 00:31:22.697 [2024-11-27 10:03:38.050513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.050541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 00:31:22.697 [2024-11-27 10:03:38.050875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.050903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 
00:31:22.697 [2024-11-27 10:03:38.051273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.051302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 00:31:22.697 [2024-11-27 10:03:38.051738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.051767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 00:31:22.697 [2024-11-27 10:03:38.052176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.052206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 00:31:22.697 [2024-11-27 10:03:38.052646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.052674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 00:31:22.697 [2024-11-27 10:03:38.052938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.052966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 00:31:22.697 [2024-11-27 10:03:38.053318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.053350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 00:31:22.697 [2024-11-27 10:03:38.053730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.053758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 00:31:22.697 [2024-11-27 10:03:38.054138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.054176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 00:31:22.697 [2024-11-27 10:03:38.054468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.054497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 00:31:22.697 [2024-11-27 10:03:38.054860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.054889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 
00:31:22.697 [2024-11-27 10:03:38.055132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.055171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 00:31:22.697 [2024-11-27 10:03:38.055328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.055360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 00:31:22.697 [2024-11-27 10:03:38.055787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.055816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 00:31:22.697 [2024-11-27 10:03:38.056192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.056221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 00:31:22.697 [2024-11-27 10:03:38.056483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.056515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 00:31:22.697 [2024-11-27 10:03:38.056717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.056746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 00:31:22.697 [2024-11-27 10:03:38.057152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.057193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 00:31:22.697 [2024-11-27 10:03:38.057547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.057576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 00:31:22.697 [2024-11-27 10:03:38.057930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.057978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 00:31:22.697 [2024-11-27 10:03:38.058240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.058270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 
00:31:22.697 [2024-11-27 10:03:38.058645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.058673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 00:31:22.697 [2024-11-27 10:03:38.058931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.697 [2024-11-27 10:03:38.058960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.697 qpair failed and we were unable to recover it. 00:31:22.698 [2024-11-27 10:03:38.059325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.059355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 00:31:22.698 [2024-11-27 10:03:38.059627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.059655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 00:31:22.698 [2024-11-27 10:03:38.060025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.060053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 00:31:22.698 [2024-11-27 10:03:38.060482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.060512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 00:31:22.698 [2024-11-27 10:03:38.060834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.060863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 00:31:22.698 [2024-11-27 10:03:38.061093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.061126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 00:31:22.698 [2024-11-27 10:03:38.061437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.061468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 00:31:22.698 [2024-11-27 10:03:38.061757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.061787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 
00:31:22.698 [2024-11-27 10:03:38.062138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.062179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 00:31:22.698 [2024-11-27 10:03:38.062562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.062592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 00:31:22.698 [2024-11-27 10:03:38.062928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.062957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 00:31:22.698 [2024-11-27 10:03:38.063327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.063357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 00:31:22.698 [2024-11-27 10:03:38.063740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.063770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 00:31:22.698 [2024-11-27 10:03:38.064027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.064056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 00:31:22.698 [2024-11-27 10:03:38.064472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.064503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 00:31:22.698 [2024-11-27 10:03:38.064882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.064911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 00:31:22.698 [2024-11-27 10:03:38.065285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.065313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 00:31:22.698 [2024-11-27 10:03:38.065672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.065701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 
00:31:22.698 [2024-11-27 10:03:38.066064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.066094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 00:31:22.698 [2024-11-27 10:03:38.066453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.066482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 00:31:22.698 [2024-11-27 10:03:38.066733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.066762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 00:31:22.698 [2024-11-27 10:03:38.067108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.067136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 00:31:22.698 [2024-11-27 10:03:38.067406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.067435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 00:31:22.698 [2024-11-27 10:03:38.067789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.067819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 00:31:22.698 [2024-11-27 10:03:38.068052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.068082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 00:31:22.698 [2024-11-27 10:03:38.068504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.068535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 00:31:22.698 [2024-11-27 10:03:38.068899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.068927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 00:31:22.698 [2024-11-27 10:03:38.069290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.069319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 
00:31:22.698 [2024-11-27 10:03:38.069684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.069712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 00:31:22.698 [2024-11-27 10:03:38.070116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.070146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 00:31:22.698 [2024-11-27 10:03:38.070502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.070533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 00:31:22.698 [2024-11-27 10:03:38.070894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.070922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 00:31:22.698 [2024-11-27 10:03:38.071282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.071312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 00:31:22.698 [2024-11-27 10:03:38.071679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.071708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 00:31:22.698 [2024-11-27 10:03:38.072074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.072103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 00:31:22.698 [2024-11-27 10:03:38.072373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.072406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 00:31:22.698 [2024-11-27 10:03:38.072799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.072837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 00:31:22.698 [2024-11-27 10:03:38.073182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.698 [2024-11-27 10:03:38.073214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.698 qpair failed and we were unable to recover it. 
00:31:22.699 [2024-11-27 10:03:38.073476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.073505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 00:31:22.699 [2024-11-27 10:03:38.073845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.073875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 00:31:22.699 [2024-11-27 10:03:38.074244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.074273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 00:31:22.699 [2024-11-27 10:03:38.074524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.074552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 00:31:22.699 [2024-11-27 10:03:38.074922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.074950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 00:31:22.699 [2024-11-27 10:03:38.075291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.075322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 00:31:22.699 [2024-11-27 10:03:38.075685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.075713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 00:31:22.699 [2024-11-27 10:03:38.076067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.076095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 00:31:22.699 [2024-11-27 10:03:38.076574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.076604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 00:31:22.699 [2024-11-27 10:03:38.076965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.076994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 
00:31:22.699 [2024-11-27 10:03:38.077264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.077296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 00:31:22.699 [2024-11-27 10:03:38.077666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.077695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 00:31:22.699 [2024-11-27 10:03:38.078064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.078093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 00:31:22.699 [2024-11-27 10:03:38.078337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.078367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 00:31:22.699 [2024-11-27 10:03:38.078727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.078758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 00:31:22.699 [2024-11-27 10:03:38.079110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.079139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 00:31:22.699 [2024-11-27 10:03:38.079544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.079575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 00:31:22.699 [2024-11-27 10:03:38.079797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.079826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 00:31:22.699 [2024-11-27 10:03:38.080154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.080191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 00:31:22.699 [2024-11-27 10:03:38.080529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.080559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 
00:31:22.699 [2024-11-27 10:03:38.080910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.080939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 00:31:22.699 [2024-11-27 10:03:38.081290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.081320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 00:31:22.699 [2024-11-27 10:03:38.081564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.081592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 00:31:22.699 [2024-11-27 10:03:38.081909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.081938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 00:31:22.699 [2024-11-27 10:03:38.082184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.082214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 00:31:22.699 [2024-11-27 10:03:38.082489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.082518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 00:31:22.699 [2024-11-27 10:03:38.082872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.082902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 00:31:22.699 [2024-11-27 10:03:38.083276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.083306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 00:31:22.699 [2024-11-27 10:03:38.083685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.083713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 00:31:22.699 [2024-11-27 10:03:38.084059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.084089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 
00:31:22.699 [2024-11-27 10:03:38.084240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.084272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 00:31:22.699 [2024-11-27 10:03:38.084526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.084555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 00:31:22.699 [2024-11-27 10:03:38.084893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.084922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 00:31:22.699 [2024-11-27 10:03:38.085298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.085329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 00:31:22.699 [2024-11-27 10:03:38.085697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.085725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 00:31:22.699 [2024-11-27 10:03:38.086098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.086127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 00:31:22.699 [2024-11-27 10:03:38.086380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.086411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 00:31:22.699 [2024-11-27 10:03:38.086790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.699 [2024-11-27 10:03:38.086820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.699 qpair failed and we were unable to recover it. 00:31:22.700 [2024-11-27 10:03:38.087196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.700 [2024-11-27 10:03:38.087234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.700 qpair failed and we were unable to recover it. 00:31:22.700 [2024-11-27 10:03:38.087553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.700 [2024-11-27 10:03:38.087582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.700 qpair failed and we were unable to recover it. 
00:31:22.700 [2024-11-27 10:03:38.087947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.700 [2024-11-27 10:03:38.087976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.700 qpair failed and we were unable to recover it. 00:31:22.700 [2024-11-27 10:03:38.088313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.700 [2024-11-27 10:03:38.088343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.700 qpair failed and we were unable to recover it. 00:31:22.700 [2024-11-27 10:03:38.088700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.700 [2024-11-27 10:03:38.088731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.700 qpair failed and we were unable to recover it. 00:31:22.700 [2024-11-27 10:03:38.089082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.700 [2024-11-27 10:03:38.089111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.700 qpair failed and we were unable to recover it. 00:31:22.700 [2024-11-27 10:03:38.089465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.700 [2024-11-27 10:03:38.089496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.700 qpair failed and we were unable to recover it. 00:31:22.700 [2024-11-27 10:03:38.089857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.700 [2024-11-27 10:03:38.089887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.700 qpair failed and we were unable to recover it. 00:31:22.700 [2024-11-27 10:03:38.090243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.700 [2024-11-27 10:03:38.090273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.700 qpair failed and we were unable to recover it. 00:31:22.700 [2024-11-27 10:03:38.090656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.700 [2024-11-27 10:03:38.090686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.700 qpair failed and we were unable to recover it. 00:31:22.700 [2024-11-27 10:03:38.091046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.700 [2024-11-27 10:03:38.091074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.700 qpair failed and we were unable to recover it. 00:31:22.700 [2024-11-27 10:03:38.091462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.700 [2024-11-27 10:03:38.091492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.700 qpair failed and we were unable to recover it. 
00:31:22.700 [2024-11-27 10:03:38.091844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.700 [2024-11-27 10:03:38.091872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:22.700 qpair failed and we were unable to recover it.
00:31:22.700 [... the identical three-line failure (posix_sock_create connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats about 210 times with only the timestamps changing, from 10:03:38.091844 through 10:03:38.173785 (console time 00:31:22.700-00:31:22.981); duplicate entries elided ...]
00:31:22.981 [2024-11-27 10:03:38.174144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.981 [2024-11-27 10:03:38.174183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.981 qpair failed and we were unable to recover it. 00:31:22.981 [2024-11-27 10:03:38.174531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.981 [2024-11-27 10:03:38.174560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.981 qpair failed and we were unable to recover it. 00:31:22.981 [2024-11-27 10:03:38.174906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.981 [2024-11-27 10:03:38.174936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.981 qpair failed and we were unable to recover it. 00:31:22.981 [2024-11-27 10:03:38.175295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.175325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 00:31:22.982 [2024-11-27 10:03:38.175691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.175719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 00:31:22.982 [2024-11-27 10:03:38.176068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.176105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 00:31:22.982 [2024-11-27 10:03:38.176505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.176535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 00:31:22.982 [2024-11-27 10:03:38.176889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.176917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 00:31:22.982 [2024-11-27 10:03:38.177291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.177320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 00:31:22.982 [2024-11-27 10:03:38.177701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.177729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 
00:31:22.982 [2024-11-27 10:03:38.178097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.178126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 00:31:22.982 [2024-11-27 10:03:38.178499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.178530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 00:31:22.982 [2024-11-27 10:03:38.178899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.178928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 00:31:22.982 [2024-11-27 10:03:38.179369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.179398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 00:31:22.982 [2024-11-27 10:03:38.179768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.179796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 00:31:22.982 [2024-11-27 10:03:38.180166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.180198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 00:31:22.982 [2024-11-27 10:03:38.180558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.180586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 00:31:22.982 [2024-11-27 10:03:38.180947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.180976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 00:31:22.982 [2024-11-27 10:03:38.181342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.181373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 00:31:22.982 [2024-11-27 10:03:38.181738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.181766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 
00:31:22.982 [2024-11-27 10:03:38.182129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.182156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 00:31:22.982 [2024-11-27 10:03:38.182556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.182586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 00:31:22.982 [2024-11-27 10:03:38.182936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.182964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 00:31:22.982 [2024-11-27 10:03:38.183324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.183353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 00:31:22.982 [2024-11-27 10:03:38.183691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.183720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 00:31:22.982 [2024-11-27 10:03:38.184080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.184108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 00:31:22.982 [2024-11-27 10:03:38.184461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.184491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 00:31:22.982 [2024-11-27 10:03:38.184831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.184862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 00:31:22.982 [2024-11-27 10:03:38.185220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.185249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 00:31:22.982 [2024-11-27 10:03:38.185628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.185666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 
00:31:22.982 [2024-11-27 10:03:38.186028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.186057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 00:31:22.982 [2024-11-27 10:03:38.186409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.186438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 00:31:22.982 [2024-11-27 10:03:38.186839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.186869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 00:31:22.982 [2024-11-27 10:03:38.187218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.187248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 00:31:22.982 [2024-11-27 10:03:38.187608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.187636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 00:31:22.982 [2024-11-27 10:03:38.187801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.187832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 00:31:22.982 [2024-11-27 10:03:38.188215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.188245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 00:31:22.982 [2024-11-27 10:03:38.188618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.188646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 00:31:22.982 [2024-11-27 10:03:38.188996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.189027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 00:31:22.982 [2024-11-27 10:03:38.189388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.189419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 
00:31:22.982 [2024-11-27 10:03:38.189740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.982 [2024-11-27 10:03:38.189769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.982 qpair failed and we were unable to recover it. 00:31:22.983 [2024-11-27 10:03:38.190113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.190141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 00:31:22.983 [2024-11-27 10:03:38.190477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.190507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 00:31:22.983 [2024-11-27 10:03:38.190870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.190900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 00:31:22.983 [2024-11-27 10:03:38.191269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.191301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 00:31:22.983 [2024-11-27 10:03:38.191656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.191691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 00:31:22.983 [2024-11-27 10:03:38.192050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.192078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 00:31:22.983 [2024-11-27 10:03:38.192416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.192445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 00:31:22.983 [2024-11-27 10:03:38.192803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.192832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 00:31:22.983 [2024-11-27 10:03:38.193193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.193225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 
00:31:22.983 [2024-11-27 10:03:38.193593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.193623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 00:31:22.983 [2024-11-27 10:03:38.193989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.194018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 00:31:22.983 [2024-11-27 10:03:38.194392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.194423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 00:31:22.983 [2024-11-27 10:03:38.194776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.194804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 00:31:22.983 [2024-11-27 10:03:38.195181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.195211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 00:31:22.983 [2024-11-27 10:03:38.195558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.195586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 00:31:22.983 [2024-11-27 10:03:38.195952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.195981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 00:31:22.983 [2024-11-27 10:03:38.196344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.196374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 00:31:22.983 [2024-11-27 10:03:38.196733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.196771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 00:31:22.983 [2024-11-27 10:03:38.197122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.197151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 
00:31:22.983 [2024-11-27 10:03:38.197485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.197514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 00:31:22.983 [2024-11-27 10:03:38.197873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.197903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 00:31:22.983 [2024-11-27 10:03:38.198275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.198305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 00:31:22.983 [2024-11-27 10:03:38.198670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.198700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 00:31:22.983 [2024-11-27 10:03:38.199056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.199085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 00:31:22.983 [2024-11-27 10:03:38.199457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.199487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 00:31:22.983 [2024-11-27 10:03:38.199794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.199824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 00:31:22.983 [2024-11-27 10:03:38.200151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.200192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 00:31:22.983 [2024-11-27 10:03:38.200589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.200620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 00:31:22.983 [2024-11-27 10:03:38.200884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.200913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 
00:31:22.983 [2024-11-27 10:03:38.201282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.201313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 00:31:22.983 [2024-11-27 10:03:38.201691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.201720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 00:31:22.983 [2024-11-27 10:03:38.202086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.202115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 00:31:22.983 [2024-11-27 10:03:38.202466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.202495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 00:31:22.983 [2024-11-27 10:03:38.202864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.202895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 00:31:22.983 [2024-11-27 10:03:38.203244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.203273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 00:31:22.983 [2024-11-27 10:03:38.203621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.983 [2024-11-27 10:03:38.203650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.983 qpair failed and we were unable to recover it. 00:31:22.983 [2024-11-27 10:03:38.204008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.984 [2024-11-27 10:03:38.204037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.984 qpair failed and we were unable to recover it. 00:31:22.984 [2024-11-27 10:03:38.204410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.984 [2024-11-27 10:03:38.204441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.984 qpair failed and we were unable to recover it. 00:31:22.984 [2024-11-27 10:03:38.204808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.984 [2024-11-27 10:03:38.204837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.984 qpair failed and we were unable to recover it. 
00:31:22.984 [2024-11-27 10:03:38.205204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.984 [2024-11-27 10:03:38.205234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.984 qpair failed and we were unable to recover it. 00:31:22.984 [2024-11-27 10:03:38.205593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.984 [2024-11-27 10:03:38.205623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.984 qpair failed and we were unable to recover it. 00:31:22.984 [2024-11-27 10:03:38.205985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.984 [2024-11-27 10:03:38.206014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.984 qpair failed and we were unable to recover it. 00:31:22.984 [2024-11-27 10:03:38.206386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.984 [2024-11-27 10:03:38.206415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.984 qpair failed and we were unable to recover it. 00:31:22.984 [2024-11-27 10:03:38.206790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.984 [2024-11-27 10:03:38.206818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.984 qpair failed and we were unable to recover it. 00:31:22.984 [2024-11-27 10:03:38.207194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.984 [2024-11-27 10:03:38.207232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.984 qpair failed and we were unable to recover it. 00:31:22.984 [2024-11-27 10:03:38.207599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.984 [2024-11-27 10:03:38.207627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.984 qpair failed and we were unable to recover it. 00:31:22.984 [2024-11-27 10:03:38.207877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.984 [2024-11-27 10:03:38.207910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.984 qpair failed and we were unable to recover it. 00:31:22.984 [2024-11-27 10:03:38.208271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.984 [2024-11-27 10:03:38.208302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.984 qpair failed and we were unable to recover it. 00:31:22.984 [2024-11-27 10:03:38.208662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.984 [2024-11-27 10:03:38.208690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.984 qpair failed and we were unable to recover it. 
00:31:22.984 [2024-11-27 10:03:38.209059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.984 [2024-11-27 10:03:38.209087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.984 qpair failed and we were unable to recover it. 00:31:22.984 [2024-11-27 10:03:38.209442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.984 [2024-11-27 10:03:38.209473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.984 qpair failed and we were unable to recover it. 00:31:22.984 [2024-11-27 10:03:38.209838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.984 [2024-11-27 10:03:38.209867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.984 qpair failed and we were unable to recover it. 00:31:22.984 [2024-11-27 10:03:38.210238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.984 [2024-11-27 10:03:38.210267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.984 qpair failed and we were unable to recover it. 00:31:22.984 [2024-11-27 10:03:38.210642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.984 [2024-11-27 10:03:38.210670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.984 qpair failed and we were unable to recover it. 00:31:22.984 [2024-11-27 10:03:38.211053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.984 [2024-11-27 10:03:38.211081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.984 qpair failed and we were unable to recover it. 00:31:22.984 [2024-11-27 10:03:38.211420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.984 [2024-11-27 10:03:38.211450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.984 qpair failed and we were unable to recover it. 00:31:22.984 [2024-11-27 10:03:38.211831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.984 [2024-11-27 10:03:38.211862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.984 qpair failed and we were unable to recover it. 00:31:22.984 [2024-11-27 10:03:38.212225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.984 [2024-11-27 10:03:38.212255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.984 qpair failed and we were unable to recover it. 00:31:22.984 [2024-11-27 10:03:38.212643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.984 [2024-11-27 10:03:38.212673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.984 qpair failed and we were unable to recover it. 
00:31:22.984 [2024-11-27 10:03:38.213037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.984 [2024-11-27 10:03:38.213065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.984 qpair failed and we were unable to recover it. 00:31:22.984 [2024-11-27 10:03:38.213427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.984 [2024-11-27 10:03:38.213457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.984 qpair failed and we were unable to recover it. 00:31:22.984 [2024-11-27 10:03:38.213819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.984 [2024-11-27 10:03:38.213849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.984 qpair failed and we were unable to recover it. 00:31:22.984 [2024-11-27 10:03:38.214207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.984 [2024-11-27 10:03:38.214237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.984 qpair failed and we were unable to recover it. 00:31:22.984 [2024-11-27 10:03:38.214641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.984 [2024-11-27 10:03:38.214670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.984 qpair failed and we were unable to recover it. 00:31:22.984 [2024-11-27 10:03:38.215026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.984 [2024-11-27 10:03:38.215055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.984 qpair failed and we were unable to recover it. 00:31:22.984 [2024-11-27 10:03:38.215466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.984 [2024-11-27 10:03:38.215496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.984 qpair failed and we were unable to recover it. 00:31:22.984 [2024-11-27 10:03:38.215845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.984 [2024-11-27 10:03:38.215877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.984 qpair failed and we were unable to recover it. 00:31:22.984 [2024-11-27 10:03:38.216120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.984 [2024-11-27 10:03:38.216149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.984 qpair failed and we were unable to recover it. 00:31:22.984 [2024-11-27 10:03:38.216525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.984 [2024-11-27 10:03:38.216555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.984 qpair failed and we were unable to recover it. 
00:31:22.984 [2024-11-27 10:03:38.216995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.985 [2024-11-27 10:03:38.217024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.985 qpair failed and we were unable to recover it. 00:31:22.985 [2024-11-27 10:03:38.217384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.985 [2024-11-27 10:03:38.217415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.985 qpair failed and we were unable to recover it. 00:31:22.985 [2024-11-27 10:03:38.217784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.985 [2024-11-27 10:03:38.217813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.985 qpair failed and we were unable to recover it. 00:31:22.985 [2024-11-27 10:03:38.218076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.985 [2024-11-27 10:03:38.218104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.985 qpair failed and we were unable to recover it. 00:31:22.985 [2024-11-27 10:03:38.218513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.985 [2024-11-27 10:03:38.218545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.985 qpair failed and we were unable to recover it. 00:31:22.985 [2024-11-27 10:03:38.218883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.985 [2024-11-27 10:03:38.218912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.985 qpair failed and we were unable to recover it. 00:31:22.985 [2024-11-27 10:03:38.219280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.985 [2024-11-27 10:03:38.219310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.985 qpair failed and we were unable to recover it. 00:31:22.985 [2024-11-27 10:03:38.219667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.985 [2024-11-27 10:03:38.219695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.985 qpair failed and we were unable to recover it. 00:31:22.985 [2024-11-27 10:03:38.220052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.985 [2024-11-27 10:03:38.220081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.985 qpair failed and we were unable to recover it. 00:31:22.985 [2024-11-27 10:03:38.220439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.985 [2024-11-27 10:03:38.220469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.985 qpair failed and we were unable to recover it. 
00:31:22.985 [2024-11-27 10:03:38.220826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.985 [2024-11-27 10:03:38.220857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.985 qpair failed and we were unable to recover it. 00:31:22.985 [2024-11-27 10:03:38.221222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.985 [2024-11-27 10:03:38.221253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.985 qpair failed and we were unable to recover it. 00:31:22.985 [2024-11-27 10:03:38.221620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.985 [2024-11-27 10:03:38.221648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.985 qpair failed and we were unable to recover it. 00:31:22.985 [2024-11-27 10:03:38.222016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.985 [2024-11-27 10:03:38.222043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.985 qpair failed and we were unable to recover it. 00:31:22.985 [2024-11-27 10:03:38.222416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.985 [2024-11-27 10:03:38.222453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.985 qpair failed and we were unable to recover it. 00:31:22.985 [2024-11-27 10:03:38.222847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.985 [2024-11-27 10:03:38.222882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.985 qpair failed and we were unable to recover it. 00:31:22.985 [2024-11-27 10:03:38.223128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.985 [2024-11-27 10:03:38.223178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.985 qpair failed and we were unable to recover it. 00:31:22.985 [2024-11-27 10:03:38.223543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.985 [2024-11-27 10:03:38.223572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.985 qpair failed and we were unable to recover it. 00:31:22.985 [2024-11-27 10:03:38.223932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.985 [2024-11-27 10:03:38.223961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.985 qpair failed and we were unable to recover it. 00:31:22.985 [2024-11-27 10:03:38.224326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.985 [2024-11-27 10:03:38.224355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.985 qpair failed and we were unable to recover it. 
00:31:22.985 [2024-11-27 10:03:38.224692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.985 [2024-11-27 10:03:38.224720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:22.985 qpair failed and we were unable to recover it.
00:31:22.985 [... this same three-message sequence (posix.c:1054 connect() failed with errno = 111, nvme_tcp.c:2288 sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats continuously until the final attempt below ...]
00:31:22.991 [2024-11-27 10:03:38.309179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.991 [2024-11-27 10:03:38.309211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:22.991 qpair failed and we were unable to recover it.
00:31:22.991 [2024-11-27 10:03:38.309556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.991 [2024-11-27 10:03:38.309586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.991 qpair failed and we were unable to recover it. 00:31:22.991 [2024-11-27 10:03:38.309922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.991 [2024-11-27 10:03:38.309952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.991 qpair failed and we were unable to recover it. 00:31:22.991 [2024-11-27 10:03:38.310308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.991 [2024-11-27 10:03:38.310339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.991 qpair failed and we were unable to recover it. 00:31:22.991 [2024-11-27 10:03:38.310589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.991 [2024-11-27 10:03:38.310619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.991 qpair failed and we were unable to recover it. 00:31:22.991 [2024-11-27 10:03:38.310966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.991 [2024-11-27 10:03:38.310996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.991 qpair failed and we were unable to recover it. 00:31:22.991 [2024-11-27 10:03:38.311325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.991 [2024-11-27 10:03:38.311356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.991 qpair failed and we were unable to recover it. 00:31:22.991 [2024-11-27 10:03:38.311716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.991 [2024-11-27 10:03:38.311746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.991 qpair failed and we were unable to recover it. 00:31:22.991 [2024-11-27 10:03:38.312104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.991 [2024-11-27 10:03:38.312136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.991 qpair failed and we were unable to recover it. 00:31:22.991 [2024-11-27 10:03:38.312406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.991 [2024-11-27 10:03:38.312440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.991 qpair failed and we were unable to recover it. 00:31:22.991 [2024-11-27 10:03:38.312810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.991 [2024-11-27 10:03:38.312840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.991 qpair failed and we were unable to recover it. 
00:31:22.991 [2024-11-27 10:03:38.313205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.991 [2024-11-27 10:03:38.313237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.991 qpair failed and we were unable to recover it. 00:31:22.991 [2024-11-27 10:03:38.313602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.991 [2024-11-27 10:03:38.313631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.991 qpair failed and we were unable to recover it. 00:31:22.991 [2024-11-27 10:03:38.313968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.991 [2024-11-27 10:03:38.314000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.991 qpair failed and we were unable to recover it. 00:31:22.991 [2024-11-27 10:03:38.314336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.991 [2024-11-27 10:03:38.314367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.991 qpair failed and we were unable to recover it. 00:31:22.991 [2024-11-27 10:03:38.314722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.991 [2024-11-27 10:03:38.314752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.991 qpair failed and we were unable to recover it. 00:31:22.991 [2024-11-27 10:03:38.315113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.991 [2024-11-27 10:03:38.315143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.991 qpair failed and we were unable to recover it. 00:31:22.991 [2024-11-27 10:03:38.315563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.991 [2024-11-27 10:03:38.315593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.991 qpair failed and we were unable to recover it. 00:31:22.991 [2024-11-27 10:03:38.315949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.991 [2024-11-27 10:03:38.315978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.991 qpair failed and we were unable to recover it. 00:31:22.991 [2024-11-27 10:03:38.316336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.991 [2024-11-27 10:03:38.316367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.991 qpair failed and we were unable to recover it. 00:31:22.991 [2024-11-27 10:03:38.316593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.991 [2024-11-27 10:03:38.316627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.991 qpair failed and we were unable to recover it. 
00:31:22.991 [2024-11-27 10:03:38.316976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.991 [2024-11-27 10:03:38.317007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.991 qpair failed and we were unable to recover it. 00:31:22.991 [2024-11-27 10:03:38.317381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.991 [2024-11-27 10:03:38.317413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.991 qpair failed and we were unable to recover it. 00:31:22.991 [2024-11-27 10:03:38.317751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.991 [2024-11-27 10:03:38.317782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.991 qpair failed and we were unable to recover it. 00:31:22.991 [2024-11-27 10:03:38.318220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.991 [2024-11-27 10:03:38.318251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.991 qpair failed and we were unable to recover it. 00:31:22.991 [2024-11-27 10:03:38.318589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.991 [2024-11-27 10:03:38.318621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.991 qpair failed and we were unable to recover it. 00:31:22.991 [2024-11-27 10:03:38.318877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.991 [2024-11-27 10:03:38.318908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.991 qpair failed and we were unable to recover it. 00:31:22.991 [2024-11-27 10:03:38.319270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.991 [2024-11-27 10:03:38.319300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.991 qpair failed and we were unable to recover it. 00:31:22.991 [2024-11-27 10:03:38.319682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.991 [2024-11-27 10:03:38.319711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.991 qpair failed and we were unable to recover it. 00:31:22.991 [2024-11-27 10:03:38.320101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.991 [2024-11-27 10:03:38.320131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.991 qpair failed and we were unable to recover it. 00:31:22.991 [2024-11-27 10:03:38.320537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.991 [2024-11-27 10:03:38.320567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 
00:31:22.992 [2024-11-27 10:03:38.320815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.320844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 00:31:22.992 [2024-11-27 10:03:38.321224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.321257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 00:31:22.992 [2024-11-27 10:03:38.321590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.321621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 00:31:22.992 [2024-11-27 10:03:38.321970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.322001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 00:31:22.992 [2024-11-27 10:03:38.322384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.322415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 00:31:22.992 [2024-11-27 10:03:38.322752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.322784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 00:31:22.992 [2024-11-27 10:03:38.323134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.323174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 00:31:22.992 [2024-11-27 10:03:38.323449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.323478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 00:31:22.992 [2024-11-27 10:03:38.325118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.325206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 00:31:22.992 [2024-11-27 10:03:38.325571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.325604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 
00:31:22.992 [2024-11-27 10:03:38.325969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.326008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 00:31:22.992 [2024-11-27 10:03:38.326324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.326356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 00:31:22.992 [2024-11-27 10:03:38.326694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.326725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 00:31:22.992 [2024-11-27 10:03:38.327102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.327131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 00:31:22.992 [2024-11-27 10:03:38.327392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.327422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 00:31:22.992 [2024-11-27 10:03:38.327797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.327829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 00:31:22.992 [2024-11-27 10:03:38.328179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.328213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 00:31:22.992 [2024-11-27 10:03:38.328572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.328603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 00:31:22.992 [2024-11-27 10:03:38.328843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.328873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 00:31:22.992 [2024-11-27 10:03:38.329271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.329301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 
00:31:22.992 [2024-11-27 10:03:38.329660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.329689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 00:31:22.992 [2024-11-27 10:03:38.330045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.330074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 00:31:22.992 [2024-11-27 10:03:38.330343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.330380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 00:31:22.992 [2024-11-27 10:03:38.330769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.330798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 00:31:22.992 [2024-11-27 10:03:38.331156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.331202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 00:31:22.992 [2024-11-27 10:03:38.331562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.331591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 00:31:22.992 [2024-11-27 10:03:38.331947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.331978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 00:31:22.992 [2024-11-27 10:03:38.332218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.332251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 00:31:22.992 [2024-11-27 10:03:38.332538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.332568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 00:31:22.992 [2024-11-27 10:03:38.332950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.332980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 
00:31:22.992 [2024-11-27 10:03:38.333356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.333387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 00:31:22.992 [2024-11-27 10:03:38.333742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.333770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 00:31:22.992 [2024-11-27 10:03:38.334103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.334131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 00:31:22.992 [2024-11-27 10:03:38.334307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.334338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 00:31:22.992 [2024-11-27 10:03:38.334710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.334739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 00:31:22.992 [2024-11-27 10:03:38.335064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.992 [2024-11-27 10:03:38.335093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.992 qpair failed and we were unable to recover it. 00:31:22.992 [2024-11-27 10:03:38.335486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.993 [2024-11-27 10:03:38.335520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.993 qpair failed and we were unable to recover it. 00:31:22.993 [2024-11-27 10:03:38.335863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.993 [2024-11-27 10:03:38.335893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.993 qpair failed and we were unable to recover it. 00:31:22.993 [2024-11-27 10:03:38.336115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.993 [2024-11-27 10:03:38.336145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.993 qpair failed and we were unable to recover it. 00:31:22.993 [2024-11-27 10:03:38.336508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.993 [2024-11-27 10:03:38.336537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.993 qpair failed and we were unable to recover it. 
00:31:22.993 [2024-11-27 10:03:38.336832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.993 [2024-11-27 10:03:38.336861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.993 qpair failed and we were unable to recover it. 00:31:22.993 [2024-11-27 10:03:38.337229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.993 [2024-11-27 10:03:38.337262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.993 qpair failed and we were unable to recover it. 00:31:22.993 [2024-11-27 10:03:38.337594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.993 [2024-11-27 10:03:38.337623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.993 qpair failed and we were unable to recover it. 00:31:22.993 [2024-11-27 10:03:38.337879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.993 [2024-11-27 10:03:38.337912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.993 qpair failed and we were unable to recover it. 00:31:22.993 [2024-11-27 10:03:38.338288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.993 [2024-11-27 10:03:38.338320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.993 qpair failed and we were unable to recover it. 00:31:22.993 [2024-11-27 10:03:38.338690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.993 [2024-11-27 10:03:38.338719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.993 qpair failed and we were unable to recover it. 00:31:22.993 [2024-11-27 10:03:38.339086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.993 [2024-11-27 10:03:38.339115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.993 qpair failed and we were unable to recover it. 00:31:22.993 [2024-11-27 10:03:38.339516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.993 [2024-11-27 10:03:38.339547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.993 qpair failed and we were unable to recover it. 00:31:22.993 [2024-11-27 10:03:38.339903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.993 [2024-11-27 10:03:38.339932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.993 qpair failed and we were unable to recover it. 00:31:22.993 [2024-11-27 10:03:38.340301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.993 [2024-11-27 10:03:38.340333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.993 qpair failed and we were unable to recover it. 
00:31:22.993 [2024-11-27 10:03:38.340698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.993 [2024-11-27 10:03:38.340727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.993 qpair failed and we were unable to recover it. 00:31:22.993 [2024-11-27 10:03:38.340967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.993 [2024-11-27 10:03:38.340995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.993 qpair failed and we were unable to recover it. 00:31:22.993 [2024-11-27 10:03:38.341338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.993 [2024-11-27 10:03:38.341369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.993 qpair failed and we were unable to recover it. 00:31:22.993 [2024-11-27 10:03:38.341632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.993 [2024-11-27 10:03:38.341662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.993 qpair failed and we were unable to recover it. 00:31:22.993 [2024-11-27 10:03:38.341928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.993 [2024-11-27 10:03:38.341959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:22.993 qpair failed and we were unable to recover it. 00:31:22.993 [2024-11-27 10:03:38.342392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.993 [2024-11-27 10:03:38.342504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.993 qpair failed and we were unable to recover it. 00:31:22.993 [2024-11-27 10:03:38.342898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.993 [2024-11-27 10:03:38.342936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.993 qpair failed and we were unable to recover it. 00:31:22.993 [2024-11-27 10:03:38.343247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.993 [2024-11-27 10:03:38.343281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.993 qpair failed and we were unable to recover it. 00:31:22.993 [2024-11-27 10:03:38.343705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.993 [2024-11-27 10:03:38.343812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.993 qpair failed and we were unable to recover it. 00:31:22.993 [2024-11-27 10:03:38.344225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.993 [2024-11-27 10:03:38.344266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.993 qpair failed and we were unable to recover it. 
00:31:22.993 [... the same failure repeats dozens more times for tqpair=0x115b0c0, through 10:03:38.376372 ...]
00:31:22.996 [2024-11-27 10:03:38.376731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.996 [2024-11-27 10:03:38.376759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.996 qpair failed and we were unable to recover it. 00:31:22.996 [2024-11-27 10:03:38.377217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.996 [2024-11-27 10:03:38.377246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.996 qpair failed and we were unable to recover it. 00:31:22.996 [2024-11-27 10:03:38.377625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.996 [2024-11-27 10:03:38.377655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.996 qpair failed and we were unable to recover it. 00:31:22.996 [2024-11-27 10:03:38.378010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.996 [2024-11-27 10:03:38.378039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.996 qpair failed and we were unable to recover it. 00:31:22.996 [2024-11-27 10:03:38.378420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.996 [2024-11-27 10:03:38.378450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.996 qpair failed and we were unable to recover it. 00:31:22.996 [2024-11-27 10:03:38.378809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.996 [2024-11-27 10:03:38.378838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.996 qpair failed and we were unable to recover it. 00:31:22.996 [2024-11-27 10:03:38.379095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.996 [2024-11-27 10:03:38.379124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.996 qpair failed and we were unable to recover it. 00:31:22.996 [2024-11-27 10:03:38.379539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.996 [2024-11-27 10:03:38.379569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.996 qpair failed and we were unable to recover it. 00:31:22.996 [2024-11-27 10:03:38.379960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.996 [2024-11-27 10:03:38.379989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.996 qpair failed and we were unable to recover it. 00:31:22.996 [2024-11-27 10:03:38.380460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.996 [2024-11-27 10:03:38.380491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.996 qpair failed and we were unable to recover it. 
00:31:22.996 [2024-11-27 10:03:38.380883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.996 [2024-11-27 10:03:38.380912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.996 qpair failed and we were unable to recover it. 00:31:22.996 [2024-11-27 10:03:38.381281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.996 [2024-11-27 10:03:38.381310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.996 qpair failed and we were unable to recover it. 00:31:22.996 [2024-11-27 10:03:38.381707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.996 [2024-11-27 10:03:38.381735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.996 qpair failed and we were unable to recover it. 00:31:22.996 [2024-11-27 10:03:38.382022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.996 [2024-11-27 10:03:38.382050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.996 qpair failed and we were unable to recover it. 00:31:22.996 [2024-11-27 10:03:38.382483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.996 [2024-11-27 10:03:38.382512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.996 qpair failed and we were unable to recover it. 00:31:22.996 [2024-11-27 10:03:38.382760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.996 [2024-11-27 10:03:38.382793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.996 qpair failed and we were unable to recover it. 00:31:22.996 [2024-11-27 10:03:38.383024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.996 [2024-11-27 10:03:38.383054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.996 qpair failed and we were unable to recover it. 00:31:22.996 [2024-11-27 10:03:38.383412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.996 [2024-11-27 10:03:38.383449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.996 qpair failed and we were unable to recover it. 00:31:22.996 [2024-11-27 10:03:38.383817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.996 [2024-11-27 10:03:38.383847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.996 qpair failed and we were unable to recover it. 00:31:22.996 [2024-11-27 10:03:38.384206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.996 [2024-11-27 10:03:38.384236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.996 qpair failed and we were unable to recover it. 
00:31:22.996 [2024-11-27 10:03:38.384598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.996 [2024-11-27 10:03:38.384627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.996 qpair failed and we were unable to recover it. 00:31:22.996 [2024-11-27 10:03:38.384979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.996 [2024-11-27 10:03:38.385008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.996 qpair failed and we were unable to recover it. 00:31:22.996 [2024-11-27 10:03:38.385351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.996 [2024-11-27 10:03:38.385380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.996 qpair failed and we were unable to recover it. 00:31:22.996 [2024-11-27 10:03:38.385738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.996 [2024-11-27 10:03:38.385766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.996 qpair failed and we were unable to recover it. 00:31:22.996 [2024-11-27 10:03:38.386133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.996 [2024-11-27 10:03:38.386169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.996 qpair failed and we were unable to recover it. 00:31:22.996 [2024-11-27 10:03:38.386531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.996 [2024-11-27 10:03:38.386561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.996 qpair failed and we were unable to recover it. 00:31:22.996 [2024-11-27 10:03:38.386798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.996 [2024-11-27 10:03:38.386826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.996 qpair failed and we were unable to recover it. 00:31:22.996 [2024-11-27 10:03:38.387181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.996 [2024-11-27 10:03:38.387211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.996 qpair failed and we were unable to recover it. 00:31:22.996 [2024-11-27 10:03:38.387473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.996 [2024-11-27 10:03:38.387506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.996 qpair failed and we were unable to recover it. 00:31:22.996 [2024-11-27 10:03:38.387737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.997 [2024-11-27 10:03:38.387769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.997 qpair failed and we were unable to recover it. 
00:31:22.997 [2024-11-27 10:03:38.387998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.997 [2024-11-27 10:03:38.388027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.997 qpair failed and we were unable to recover it. 00:31:22.997 [2024-11-27 10:03:38.388391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.997 [2024-11-27 10:03:38.388422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.997 qpair failed and we were unable to recover it. 00:31:22.997 [2024-11-27 10:03:38.388656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.997 [2024-11-27 10:03:38.388684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.997 qpair failed and we were unable to recover it. 00:31:22.997 [2024-11-27 10:03:38.389044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.997 [2024-11-27 10:03:38.389073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.997 qpair failed and we were unable to recover it. 00:31:22.997 [2024-11-27 10:03:38.389419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.997 [2024-11-27 10:03:38.389451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.997 qpair failed and we were unable to recover it. 00:31:22.997 [2024-11-27 10:03:38.389803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.997 [2024-11-27 10:03:38.389831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.997 qpair failed and we were unable to recover it. 00:31:22.997 [2024-11-27 10:03:38.390231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.997 [2024-11-27 10:03:38.390262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.997 qpair failed and we were unable to recover it. 00:31:22.997 [2024-11-27 10:03:38.390604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.997 [2024-11-27 10:03:38.390633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.997 qpair failed and we were unable to recover it. 00:31:22.997 [2024-11-27 10:03:38.390983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.997 [2024-11-27 10:03:38.391014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.997 qpair failed and we were unable to recover it. 00:31:22.997 [2024-11-27 10:03:38.391264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.997 [2024-11-27 10:03:38.391294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.997 qpair failed and we were unable to recover it. 
00:31:22.997 [2024-11-27 10:03:38.391662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.997 [2024-11-27 10:03:38.391691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.997 qpair failed and we were unable to recover it. 00:31:22.997 [2024-11-27 10:03:38.392045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.997 [2024-11-27 10:03:38.392073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.997 qpair failed and we were unable to recover it. 00:31:22.997 [2024-11-27 10:03:38.392416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.997 [2024-11-27 10:03:38.392446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.997 qpair failed and we were unable to recover it. 00:31:22.997 [2024-11-27 10:03:38.392807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.997 [2024-11-27 10:03:38.392836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.997 qpair failed and we were unable to recover it. 00:31:22.997 [2024-11-27 10:03:38.393195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.997 [2024-11-27 10:03:38.393231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.997 qpair failed and we were unable to recover it. 00:31:22.997 [2024-11-27 10:03:38.393519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.997 [2024-11-27 10:03:38.393548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.997 qpair failed and we were unable to recover it. 00:31:22.997 [2024-11-27 10:03:38.393795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.997 [2024-11-27 10:03:38.393824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.997 qpair failed and we were unable to recover it. 00:31:22.997 [2024-11-27 10:03:38.394056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.997 [2024-11-27 10:03:38.394088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.997 qpair failed and we were unable to recover it. 00:31:22.997 [2024-11-27 10:03:38.394437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.997 [2024-11-27 10:03:38.394466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.997 qpair failed and we were unable to recover it. 00:31:22.997 [2024-11-27 10:03:38.394828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.997 [2024-11-27 10:03:38.394857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.997 qpair failed and we were unable to recover it. 
00:31:22.997 [2024-11-27 10:03:38.395222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.997 [2024-11-27 10:03:38.395251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.997 qpair failed and we were unable to recover it. 00:31:22.997 [2024-11-27 10:03:38.395604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.997 [2024-11-27 10:03:38.395633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.997 qpair failed and we were unable to recover it. 00:31:22.997 [2024-11-27 10:03:38.395995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.997 [2024-11-27 10:03:38.396024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.997 qpair failed and we were unable to recover it. 00:31:22.997 [2024-11-27 10:03:38.396386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.997 [2024-11-27 10:03:38.396416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.997 qpair failed and we were unable to recover it. 00:31:22.997 [2024-11-27 10:03:38.396747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.997 [2024-11-27 10:03:38.396775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.997 qpair failed and we were unable to recover it. 00:31:22.997 [2024-11-27 10:03:38.397143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.997 [2024-11-27 10:03:38.397182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.997 qpair failed and we were unable to recover it. 00:31:22.997 [2024-11-27 10:03:38.397582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.997 [2024-11-27 10:03:38.397610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.997 qpair failed and we were unable to recover it. 00:31:22.997 [2024-11-27 10:03:38.397866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.997 [2024-11-27 10:03:38.397894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.997 qpair failed and we were unable to recover it. 00:31:22.997 [2024-11-27 10:03:38.398137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.997 [2024-11-27 10:03:38.398180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.997 qpair failed and we were unable to recover it. 00:31:22.997 [2024-11-27 10:03:38.398527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.997 [2024-11-27 10:03:38.398556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.997 qpair failed and we were unable to recover it. 
00:31:22.997 [2024-11-27 10:03:38.398898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.997 [2024-11-27 10:03:38.398927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.998 qpair failed and we were unable to recover it. 00:31:22.998 [2024-11-27 10:03:38.399285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.998 [2024-11-27 10:03:38.399314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.998 qpair failed and we were unable to recover it. 00:31:22.998 [2024-11-27 10:03:38.399702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.998 [2024-11-27 10:03:38.399730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.998 qpair failed and we were unable to recover it. 00:31:22.998 [2024-11-27 10:03:38.400097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.998 [2024-11-27 10:03:38.400127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.998 qpair failed and we were unable to recover it. 00:31:22.998 [2024-11-27 10:03:38.400455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.998 [2024-11-27 10:03:38.400484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.998 qpair failed and we were unable to recover it. 00:31:22.998 [2024-11-27 10:03:38.400834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.998 [2024-11-27 10:03:38.400861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.998 qpair failed and we were unable to recover it. 00:31:22.998 [2024-11-27 10:03:38.401197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.998 [2024-11-27 10:03:38.401228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.998 qpair failed and we were unable to recover it. 00:31:22.998 [2024-11-27 10:03:38.401614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.998 [2024-11-27 10:03:38.401645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.998 qpair failed and we were unable to recover it. 00:31:22.998 [2024-11-27 10:03:38.402013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.998 [2024-11-27 10:03:38.402040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.998 qpair failed and we were unable to recover it. 00:31:22.998 [2024-11-27 10:03:38.402455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.998 [2024-11-27 10:03:38.402486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.998 qpair failed and we were unable to recover it. 
00:31:22.998 [2024-11-27 10:03:38.402828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.998 [2024-11-27 10:03:38.402858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.998 qpair failed and we were unable to recover it. 00:31:22.998 [2024-11-27 10:03:38.403221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.998 [2024-11-27 10:03:38.403251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.998 qpair failed and we were unable to recover it. 00:31:22.998 [2024-11-27 10:03:38.403588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.998 [2024-11-27 10:03:38.403618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.998 qpair failed and we were unable to recover it. 00:31:22.998 [2024-11-27 10:03:38.403868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.998 [2024-11-27 10:03:38.403901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.998 qpair failed and we were unable to recover it. 00:31:22.998 [2024-11-27 10:03:38.404242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.998 [2024-11-27 10:03:38.404272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.998 qpair failed and we were unable to recover it. 00:31:22.998 [2024-11-27 10:03:38.404524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.998 [2024-11-27 10:03:38.404553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.998 qpair failed and we were unable to recover it. 00:31:22.998 [2024-11-27 10:03:38.404905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.998 [2024-11-27 10:03:38.404936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.998 qpair failed and we were unable to recover it. 00:31:22.998 [2024-11-27 10:03:38.405293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.998 [2024-11-27 10:03:38.405324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.998 qpair failed and we were unable to recover it. 00:31:22.998 [2024-11-27 10:03:38.405725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.998 [2024-11-27 10:03:38.405754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.998 qpair failed and we were unable to recover it. 00:31:22.998 [2024-11-27 10:03:38.406119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.998 [2024-11-27 10:03:38.406148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.998 qpair failed and we were unable to recover it. 
00:31:22.998 [2024-11-27 10:03:38.406397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.998 [2024-11-27 10:03:38.406430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.998 qpair failed and we were unable to recover it. 00:31:22.998 [2024-11-27 10:03:38.406771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.998 [2024-11-27 10:03:38.406802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.998 qpair failed and we were unable to recover it. 00:31:22.998 [2024-11-27 10:03:38.407155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.998 [2024-11-27 10:03:38.407197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.998 qpair failed and we were unable to recover it. 00:31:22.998 [2024-11-27 10:03:38.407530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.998 [2024-11-27 10:03:38.407559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.998 qpair failed and we were unable to recover it. 00:31:22.998 [2024-11-27 10:03:38.407929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.998 [2024-11-27 10:03:38.407958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.998 qpair failed and we were unable to recover it. 00:31:22.998 [2024-11-27 10:03:38.408225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.998 [2024-11-27 10:03:38.408255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.998 qpair failed and we were unable to recover it. 00:31:22.998 [2024-11-27 10:03:38.408594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.998 [2024-11-27 10:03:38.408623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.998 qpair failed and we were unable to recover it. 00:31:22.998 [2024-11-27 10:03:38.408989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.998 [2024-11-27 10:03:38.409018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.998 qpair failed and we were unable to recover it. 00:31:22.998 [2024-11-27 10:03:38.409274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.998 [2024-11-27 10:03:38.409304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.998 qpair failed and we were unable to recover it. 00:31:22.998 [2024-11-27 10:03:38.409546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.998 [2024-11-27 10:03:38.409576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.998 qpair failed and we were unable to recover it. 
00:31:22.998 [2024-11-27 10:03:38.409910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.998 [2024-11-27 10:03:38.409938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.998 qpair failed and we were unable to recover it. 00:31:22.998 [2024-11-27 10:03:38.410277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.999 [2024-11-27 10:03:38.410308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.999 qpair failed and we were unable to recover it. 00:31:22.999 [2024-11-27 10:03:38.410590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.999 [2024-11-27 10:03:38.410618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.999 qpair failed and we were unable to recover it. 00:31:22.999 [2024-11-27 10:03:38.410959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.999 [2024-11-27 10:03:38.410988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.999 qpair failed and we were unable to recover it. 00:31:22.999 [2024-11-27 10:03:38.411332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.999 [2024-11-27 10:03:38.411361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.999 qpair failed and we were unable to recover it. 00:31:22.999 [2024-11-27 10:03:38.411730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.999 [2024-11-27 10:03:38.411760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.999 qpair failed and we were unable to recover it. 00:31:22.999 [2024-11-27 10:03:38.412126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.999 [2024-11-27 10:03:38.412157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.999 qpair failed and we were unable to recover it. 00:31:22.999 [2024-11-27 10:03:38.412535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.999 [2024-11-27 10:03:38.412564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.999 qpair failed and we were unable to recover it. 00:31:22.999 [2024-11-27 10:03:38.412926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.999 [2024-11-27 10:03:38.412953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.999 qpair failed and we were unable to recover it. 00:31:22.999 [2024-11-27 10:03:38.413317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.999 [2024-11-27 10:03:38.413347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.999 qpair failed and we were unable to recover it. 
00:31:22.999 [2024-11-27 10:03:38.413703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.999 [2024-11-27 10:03:38.413731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.999 qpair failed and we were unable to recover it. 00:31:22.999 [2024-11-27 10:03:38.413979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.999 [2024-11-27 10:03:38.414007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.999 qpair failed and we were unable to recover it. 00:31:22.999 [2024-11-27 10:03:38.414247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.999 [2024-11-27 10:03:38.414280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.999 qpair failed and we were unable to recover it. 00:31:22.999 [2024-11-27 10:03:38.414635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.999 [2024-11-27 10:03:38.414663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.999 qpair failed and we were unable to recover it. 00:31:22.999 [2024-11-27 10:03:38.414902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.999 [2024-11-27 10:03:38.414930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.999 qpair failed and we were unable to recover it. 00:31:22.999 [2024-11-27 10:03:38.415196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.999 [2024-11-27 10:03:38.415227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.999 qpair failed and we were unable to recover it. 00:31:22.999 [2024-11-27 10:03:38.415566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.999 [2024-11-27 10:03:38.415595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.999 qpair failed and we were unable to recover it. 00:31:22.999 [2024-11-27 10:03:38.415937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.999 [2024-11-27 10:03:38.415967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.999 qpair failed and we were unable to recover it. 00:31:22.999 [2024-11-27 10:03:38.416220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.999 [2024-11-27 10:03:38.416250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.999 qpair failed and we were unable to recover it. 00:31:22.999 [2024-11-27 10:03:38.416603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.999 [2024-11-27 10:03:38.416631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.999 qpair failed and we were unable to recover it. 
00:31:22.999 [2024-11-27 10:03:38.417004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.999 [2024-11-27 10:03:38.417031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.999 qpair failed and we were unable to recover it. 00:31:22.999 [2024-11-27 10:03:38.417399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.999 [2024-11-27 10:03:38.417433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.999 qpair failed and we were unable to recover it. 00:31:22.999 [2024-11-27 10:03:38.417796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.999 [2024-11-27 10:03:38.417830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.999 qpair failed and we were unable to recover it. 00:31:22.999 [2024-11-27 10:03:38.418181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.999 [2024-11-27 10:03:38.418214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.999 qpair failed and we were unable to recover it. 00:31:22.999 [2024-11-27 10:03:38.418471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.999 [2024-11-27 10:03:38.418504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.999 qpair failed and we were unable to recover it. 00:31:22.999 [2024-11-27 10:03:38.418844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.999 [2024-11-27 10:03:38.418873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.999 qpair failed and we were unable to recover it. 00:31:22.999 [2024-11-27 10:03:38.419237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.999 [2024-11-27 10:03:38.419267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.999 qpair failed and we were unable to recover it. 00:31:22.999 [2024-11-27 10:03:38.419517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.999 [2024-11-27 10:03:38.419548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.999 qpair failed and we were unable to recover it. 00:31:22.999 [2024-11-27 10:03:38.419896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.999 [2024-11-27 10:03:38.419925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.999 qpair failed and we were unable to recover it. 00:31:22.999 [2024-11-27 10:03:38.420277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.999 [2024-11-27 10:03:38.420307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:22.999 qpair failed and we were unable to recover it. 
00:31:22.999 [2024-11-27 10:03:38.420685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.000 [2024-11-27 10:03:38.420714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.000 qpair failed and we were unable to recover it. 00:31:23.000 [2024-11-27 10:03:38.421078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.000 [2024-11-27 10:03:38.421105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.000 qpair failed and we were unable to recover it. 00:31:23.000 [2024-11-27 10:03:38.421474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.000 [2024-11-27 10:03:38.421505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.000 qpair failed and we were unable to recover it. 00:31:23.000 [2024-11-27 10:03:38.421837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.000 [2024-11-27 10:03:38.421865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.000 qpair failed and we were unable to recover it. 00:31:23.000 [2024-11-27 10:03:38.422217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.000 [2024-11-27 10:03:38.422247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.000 qpair failed and we were unable to recover it. 00:31:23.000 [2024-11-27 10:03:38.422606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.000 [2024-11-27 10:03:38.422634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.000 qpair failed and we were unable to recover it. 00:31:23.000 [2024-11-27 10:03:38.422978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.000 [2024-11-27 10:03:38.423008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.000 qpair failed and we were unable to recover it. 00:31:23.000 [2024-11-27 10:03:38.423367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.000 [2024-11-27 10:03:38.423397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.000 qpair failed and we were unable to recover it. 00:31:23.000 [2024-11-27 10:03:38.423766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.000 [2024-11-27 10:03:38.423803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.000 qpair failed and we were unable to recover it. 00:31:23.000 [2024-11-27 10:03:38.424229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.000 [2024-11-27 10:03:38.424258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.000 qpair failed and we were unable to recover it. 
00:31:23.000 [2024-11-27 10:03:38.424629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.000 [2024-11-27 10:03:38.424657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.000 qpair failed and we were unable to recover it.
[... the same three-message failure pattern (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats for every reconnect attempt between 10:03:38.424 and 10:03:38.507; the duplicate entries are elided here ...]
00:31:23.281 [2024-11-27 10:03:38.507189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.281 [2024-11-27 10:03:38.507219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.281 qpair failed and we were unable to recover it.
00:31:23.281 [2024-11-27 10:03:38.507622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.281 [2024-11-27 10:03:38.507651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.281 qpair failed and we were unable to recover it. 00:31:23.281 [2024-11-27 10:03:38.508000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.281 [2024-11-27 10:03:38.508029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.281 qpair failed and we were unable to recover it. 00:31:23.281 [2024-11-27 10:03:38.508396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.281 [2024-11-27 10:03:38.508425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.281 qpair failed and we were unable to recover it. 00:31:23.281 [2024-11-27 10:03:38.508665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.281 [2024-11-27 10:03:38.508693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.281 qpair failed and we were unable to recover it. 00:31:23.281 [2024-11-27 10:03:38.508950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.281 [2024-11-27 10:03:38.508982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.281 qpair failed and we were unable to recover it. 00:31:23.281 [2024-11-27 10:03:38.509336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-11-27 10:03:38.509369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-11-27 10:03:38.509775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-11-27 10:03:38.509806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-11-27 10:03:38.510057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-11-27 10:03:38.510085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-11-27 10:03:38.510322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-11-27 10:03:38.510353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-11-27 10:03:38.510704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-11-27 10:03:38.510732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 
00:31:23.282 [2024-11-27 10:03:38.511102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-11-27 10:03:38.511130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-11-27 10:03:38.511537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-11-27 10:03:38.511568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-11-27 10:03:38.511929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-11-27 10:03:38.511958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-11-27 10:03:38.512319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-11-27 10:03:38.512349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-11-27 10:03:38.512795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-11-27 10:03:38.512823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-11-27 10:03:38.513197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-11-27 10:03:38.513248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-11-27 10:03:38.513634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-11-27 10:03:38.513669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-11-27 10:03:38.514027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-11-27 10:03:38.514058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-11-27 10:03:38.514426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-11-27 10:03:38.514456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-11-27 10:03:38.514815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-11-27 10:03:38.514843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 
00:31:23.282 [2024-11-27 10:03:38.515207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-11-27 10:03:38.515237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-11-27 10:03:38.515611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-11-27 10:03:38.515638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-11-27 10:03:38.515986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-11-27 10:03:38.516017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-11-27 10:03:38.516389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-11-27 10:03:38.516419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-11-27 10:03:38.516774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-11-27 10:03:38.516802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-11-27 10:03:38.517209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-11-27 10:03:38.517238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-11-27 10:03:38.517495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-11-27 10:03:38.517526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-11-27 10:03:38.517873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-11-27 10:03:38.517903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-11-27 10:03:38.518257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-11-27 10:03:38.518287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-11-27 10:03:38.518658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-11-27 10:03:38.518687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 
00:31:23.282 [2024-11-27 10:03:38.519042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-11-27 10:03:38.519071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-11-27 10:03:38.519414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-11-27 10:03:38.519444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-11-27 10:03:38.519807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-11-27 10:03:38.519837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-11-27 10:03:38.520208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-11-27 10:03:38.520237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-11-27 10:03:38.520616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-11-27 10:03:38.520646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-11-27 10:03:38.520976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-11-27 10:03:38.521005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-11-27 10:03:38.521381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-11-27 10:03:38.521411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-11-27 10:03:38.521763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-11-27 10:03:38.521791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-11-27 10:03:38.522146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-11-27 10:03:38.522185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-11-27 10:03:38.522430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-11-27 10:03:38.522461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 
00:31:23.283 [2024-11-27 10:03:38.522828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-11-27 10:03:38.522857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-11-27 10:03:38.523219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-11-27 10:03:38.523250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-11-27 10:03:38.523622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-11-27 10:03:38.523652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-11-27 10:03:38.524004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-11-27 10:03:38.524040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-11-27 10:03:38.524390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-11-27 10:03:38.524422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-11-27 10:03:38.524780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-11-27 10:03:38.524809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-11-27 10:03:38.525167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-11-27 10:03:38.525200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-11-27 10:03:38.525558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-11-27 10:03:38.525587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-11-27 10:03:38.525958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-11-27 10:03:38.525987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-11-27 10:03:38.526340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-11-27 10:03:38.526370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 
00:31:23.283 [2024-11-27 10:03:38.526729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-11-27 10:03:38.526758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-11-27 10:03:38.527185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-11-27 10:03:38.527215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-11-27 10:03:38.527551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-11-27 10:03:38.527582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-11-27 10:03:38.527962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-11-27 10:03:38.527991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-11-27 10:03:38.528334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-11-27 10:03:38.528365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-11-27 10:03:38.528724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-11-27 10:03:38.528752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-11-27 10:03:38.529006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-11-27 10:03:38.529034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-11-27 10:03:38.529404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-11-27 10:03:38.529435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-11-27 10:03:38.529796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-11-27 10:03:38.529826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-11-27 10:03:38.530187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-11-27 10:03:38.530217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 
00:31:23.283 [2024-11-27 10:03:38.530579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-11-27 10:03:38.530608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-11-27 10:03:38.530965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-11-27 10:03:38.530994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-11-27 10:03:38.531345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-11-27 10:03:38.531373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-11-27 10:03:38.531717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-11-27 10:03:38.531746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-11-27 10:03:38.532112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-11-27 10:03:38.532141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-11-27 10:03:38.532483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-11-27 10:03:38.532511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-11-27 10:03:38.532876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-11-27 10:03:38.532907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-11-27 10:03:38.533279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-11-27 10:03:38.533311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-11-27 10:03:38.533654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-11-27 10:03:38.533683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-11-27 10:03:38.534042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-11-27 10:03:38.534072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 
00:31:23.284 [2024-11-27 10:03:38.534332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-11-27 10:03:38.534361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-11-27 10:03:38.534535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-11-27 10:03:38.534568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-11-27 10:03:38.534938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-11-27 10:03:38.534967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-11-27 10:03:38.535325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-11-27 10:03:38.535356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-11-27 10:03:38.535759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-11-27 10:03:38.535787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-11-27 10:03:38.536139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-11-27 10:03:38.536180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-11-27 10:03:38.536541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-11-27 10:03:38.536570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-11-27 10:03:38.536923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-11-27 10:03:38.536951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-11-27 10:03:38.537314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-11-27 10:03:38.537345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-11-27 10:03:38.537607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-11-27 10:03:38.537636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 
00:31:23.284 [2024-11-27 10:03:38.537992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-11-27 10:03:38.538020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-11-27 10:03:38.538397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-11-27 10:03:38.538427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-11-27 10:03:38.538795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-11-27 10:03:38.538825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-11-27 10:03:38.539189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-11-27 10:03:38.539218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-11-27 10:03:38.539578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-11-27 10:03:38.539607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-11-27 10:03:38.539962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-11-27 10:03:38.539991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-11-27 10:03:38.540271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-11-27 10:03:38.540300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-11-27 10:03:38.540665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-11-27 10:03:38.540694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-11-27 10:03:38.541043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-11-27 10:03:38.541073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-11-27 10:03:38.541416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-11-27 10:03:38.541447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 
00:31:23.284 [2024-11-27 10:03:38.541801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-11-27 10:03:38.541830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-11-27 10:03:38.542193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-11-27 10:03:38.542222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-11-27 10:03:38.542581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-11-27 10:03:38.542609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-11-27 10:03:38.542963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-11-27 10:03:38.542993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-11-27 10:03:38.543365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-11-27 10:03:38.543395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-11-27 10:03:38.543655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-11-27 10:03:38.543684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-11-27 10:03:38.544040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-11-27 10:03:38.544070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-11-27 10:03:38.544449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-11-27 10:03:38.544478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-11-27 10:03:38.544838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-11-27 10:03:38.544868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-11-27 10:03:38.545237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-11-27 10:03:38.545268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 
00:31:23.285 [2024-11-27 10:03:38.545619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-11-27 10:03:38.545647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-11-27 10:03:38.545986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-11-27 10:03:38.546014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-11-27 10:03:38.546342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-11-27 10:03:38.546370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-11-27 10:03:38.546728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-11-27 10:03:38.546759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-11-27 10:03:38.547193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-11-27 10:03:38.547222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-11-27 10:03:38.547572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-11-27 10:03:38.547600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-11-27 10:03:38.547942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-11-27 10:03:38.547972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-11-27 10:03:38.548226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-11-27 10:03:38.548255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-11-27 10:03:38.548641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-11-27 10:03:38.548670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-11-27 10:03:38.549020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-11-27 10:03:38.549048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 
00:31:23.285 [2024-11-27 10:03:38.549410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-11-27 10:03:38.549440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-11-27 10:03:38.549801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-11-27 10:03:38.549838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-11-27 10:03:38.550183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-11-27 10:03:38.550215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-11-27 10:03:38.552025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-11-27 10:03:38.552089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-11-27 10:03:38.552482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-11-27 10:03:38.552518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-11-27 10:03:38.552900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-11-27 10:03:38.552932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-11-27 10:03:38.553299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-11-27 10:03:38.553330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-11-27 10:03:38.553700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-11-27 10:03:38.553728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-11-27 10:03:38.554109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-11-27 10:03:38.554138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-11-27 10:03:38.554589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-11-27 10:03:38.554620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 
00:31:23.285 [2024-11-27 10:03:38.554956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-11-27 10:03:38.554986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-11-27 10:03:38.555357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-11-27 10:03:38.555388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-11-27 10:03:38.555743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-11-27 10:03:38.555773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-11-27 10:03:38.556137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-11-27 10:03:38.556173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.286 [2024-11-27 10:03:38.556549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.286 [2024-11-27 10:03:38.556579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.286 qpair failed and we were unable to recover it. 00:31:23.286 [2024-11-27 10:03:38.556927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.286 [2024-11-27 10:03:38.556958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.286 qpair failed and we were unable to recover it. 00:31:23.286 [2024-11-27 10:03:38.557328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.286 [2024-11-27 10:03:38.557360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.286 qpair failed and we were unable to recover it. 00:31:23.286 [2024-11-27 10:03:38.557730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.286 [2024-11-27 10:03:38.557759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.286 qpair failed and we were unable to recover it. 00:31:23.286 [2024-11-27 10:03:38.558126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.286 [2024-11-27 10:03:38.558153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.286 qpair failed and we were unable to recover it. 00:31:23.286 [2024-11-27 10:03:38.558564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.286 [2024-11-27 10:03:38.558596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.286 qpair failed and we were unable to recover it. 
00:31:23.286 [2024-11-27 10:03:38.558939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.286 [2024-11-27 10:03:38.558971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.286 qpair failed and we were unable to recover it.
00:31:23.286 [... the same three-line error sequence repeats for roughly 200 further connect() attempts between 10:03:38.559 and 10:03:38.640, differing only in timestamps; every attempt on tqpair=0x115b0c0 (addr=10.0.0.2, port=4420) fails with errno = 111 and the qpair is not recovered ...]
00:31:23.293 [2024-11-27 10:03:38.640005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.293 [2024-11-27 10:03:38.640034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.293 qpair failed and we were unable to recover it. 00:31:23.293 [2024-11-27 10:03:38.640387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.293 [2024-11-27 10:03:38.640416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.293 qpair failed and we were unable to recover it. 00:31:23.293 [2024-11-27 10:03:38.640768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.293 [2024-11-27 10:03:38.640799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.293 qpair failed and we were unable to recover it. 00:31:23.293 [2024-11-27 10:03:38.641174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.293 [2024-11-27 10:03:38.641205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.293 qpair failed and we were unable to recover it. 00:31:23.293 [2024-11-27 10:03:38.641578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.293 [2024-11-27 10:03:38.641607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.293 qpair failed and we were unable to recover it. 00:31:23.293 [2024-11-27 10:03:38.641986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.293 [2024-11-27 10:03:38.642015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.293 qpair failed and we were unable to recover it. 00:31:23.293 [2024-11-27 10:03:38.642357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.293 [2024-11-27 10:03:38.642387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.293 qpair failed and we were unable to recover it. 00:31:23.293 [2024-11-27 10:03:38.642739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.293 [2024-11-27 10:03:38.642767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.293 qpair failed and we were unable to recover it. 00:31:23.293 [2024-11-27 10:03:38.643145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.293 [2024-11-27 10:03:38.643183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.293 qpair failed and we were unable to recover it. 00:31:23.293 [2024-11-27 10:03:38.643450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.293 [2024-11-27 10:03:38.643478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.293 qpair failed and we were unable to recover it. 
00:31:23.293 [2024-11-27 10:03:38.643825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.293 [2024-11-27 10:03:38.643854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.293 qpair failed and we were unable to recover it. 00:31:23.293 [2024-11-27 10:03:38.644235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.293 [2024-11-27 10:03:38.644266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.293 qpair failed and we were unable to recover it. 00:31:23.293 [2024-11-27 10:03:38.644462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.293 [2024-11-27 10:03:38.644493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.293 qpair failed and we were unable to recover it. 00:31:23.293 [2024-11-27 10:03:38.644852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.293 [2024-11-27 10:03:38.644882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.293 qpair failed and we were unable to recover it. 00:31:23.293 [2024-11-27 10:03:38.645132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.293 [2024-11-27 10:03:38.645171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.293 qpair failed and we were unable to recover it. 00:31:23.294 [2024-11-27 10:03:38.645538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.294 [2024-11-27 10:03:38.645574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.294 qpair failed and we were unable to recover it. 00:31:23.294 [2024-11-27 10:03:38.645915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.294 [2024-11-27 10:03:38.645945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.294 qpair failed and we were unable to recover it. 00:31:23.294 [2024-11-27 10:03:38.646308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.294 [2024-11-27 10:03:38.646339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.294 qpair failed and we were unable to recover it. 00:31:23.294 [2024-11-27 10:03:38.646686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.294 [2024-11-27 10:03:38.646714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.294 qpair failed and we were unable to recover it. 00:31:23.294 [2024-11-27 10:03:38.647077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.294 [2024-11-27 10:03:38.647106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.294 qpair failed and we were unable to recover it. 
00:31:23.294 [2024-11-27 10:03:38.647460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.294 [2024-11-27 10:03:38.647490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.294 qpair failed and we were unable to recover it. 00:31:23.294 [2024-11-27 10:03:38.647850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.294 [2024-11-27 10:03:38.647878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.294 qpair failed and we were unable to recover it. 00:31:23.294 [2024-11-27 10:03:38.648112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.294 [2024-11-27 10:03:38.648143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.294 qpair failed and we were unable to recover it. 00:31:23.294 [2024-11-27 10:03:38.648523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.294 [2024-11-27 10:03:38.648553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.294 qpair failed and we were unable to recover it. 00:31:23.294 [2024-11-27 10:03:38.648923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.294 [2024-11-27 10:03:38.648952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.294 qpair failed and we were unable to recover it. 00:31:23.294 [2024-11-27 10:03:38.649320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.294 [2024-11-27 10:03:38.649352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.294 qpair failed and we were unable to recover it. 00:31:23.294 [2024-11-27 10:03:38.649754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.294 [2024-11-27 10:03:38.649784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.294 qpair failed and we were unable to recover it. 00:31:23.294 [2024-11-27 10:03:38.650028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.294 [2024-11-27 10:03:38.650060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.294 qpair failed and we were unable to recover it. 00:31:23.294 [2024-11-27 10:03:38.650408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.294 [2024-11-27 10:03:38.650439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.294 qpair failed and we were unable to recover it. 00:31:23.294 [2024-11-27 10:03:38.650792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.294 [2024-11-27 10:03:38.650821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.294 qpair failed and we were unable to recover it. 
00:31:23.294 [2024-11-27 10:03:38.651186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.294 [2024-11-27 10:03:38.651217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.294 qpair failed and we were unable to recover it. 00:31:23.294 [2024-11-27 10:03:38.651600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.294 [2024-11-27 10:03:38.651630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.294 qpair failed and we were unable to recover it. 00:31:23.294 [2024-11-27 10:03:38.652008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.294 [2024-11-27 10:03:38.652039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.294 qpair failed and we were unable to recover it. 00:31:23.294 [2024-11-27 10:03:38.652409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.294 [2024-11-27 10:03:38.652438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.294 qpair failed and we were unable to recover it. 00:31:23.294 [2024-11-27 10:03:38.652805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.294 [2024-11-27 10:03:38.652834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.294 qpair failed and we were unable to recover it. 00:31:23.294 [2024-11-27 10:03:38.653197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.294 [2024-11-27 10:03:38.653227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.294 qpair failed and we were unable to recover it. 00:31:23.294 [2024-11-27 10:03:38.653581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.294 [2024-11-27 10:03:38.653618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.294 qpair failed and we were unable to recover it. 00:31:23.294 [2024-11-27 10:03:38.653980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.294 [2024-11-27 10:03:38.654009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.294 qpair failed and we were unable to recover it. 00:31:23.294 [2024-11-27 10:03:38.654273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.294 [2024-11-27 10:03:38.654303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.294 qpair failed and we were unable to recover it. 00:31:23.294 [2024-11-27 10:03:38.654687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.294 [2024-11-27 10:03:38.654716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.294 qpair failed and we were unable to recover it. 
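errno 111 on Linux is ECONNREFUSED: the host can reach 10.0.0.2, but nothing is listening on port 4420, so every reconnect attempt produces the same posix_sock_create/nvme_tcp_qpair_connect_sock/qpair-failed triplet, recurring for each retry between the two entries shown above. A quick probe of the same address shows the identical refusal while the target is down (a sketch, not part of the captured log):

    $ bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'
    bash: connect: Connection refused
    bash: /dev/tcp/10.0.0.2/4420: Connection refused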
00:31:23.294 [2024-11-27 10:03:38.655076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.294 [2024-11-27 10:03:38.655105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.294 qpair failed and we were unable to recover it. 00:31:23.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 4072159 Killed "${NVMF_APP[@]}" "$@" 00:31:23.294 [2024-11-27 10:03:38.655486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.294 [2024-11-27 10:03:38.655518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.294 qpair failed and we were unable to recover it. 00:31:23.294 [2024-11-27 10:03:38.655949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.294 [2024-11-27 10:03:38.655981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.294 qpair failed and we were unable to recover it. 00:31:23.294 10:03:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:31:23.294 [2024-11-27 10:03:38.656335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.295 [2024-11-27 10:03:38.656367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.295 qpair failed and we were unable to recover it. 00:31:23.295 10:03:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:23.295 [2024-11-27 10:03:38.656722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.295 [2024-11-27 10:03:38.656751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.295 qpair failed and we were unable to recover it. 00:31:23.295 10:03:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:23.295 10:03:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:23.295 [2024-11-27 10:03:38.657122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.295 [2024-11-27 10:03:38.657151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.295 qpair failed and we were unable to recover it. 00:31:23.295 10:03:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:23.295 [2024-11-27 10:03:38.657522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.295 [2024-11-27 10:03:38.657551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.295 qpair failed and we were unable to recover it. 
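The trace above is the pivot of this test case: target_disconnect.sh has killed the running nvmf_tgt (pid 4072159), which is what turned the host's reconnects into ECONNREFUSED, and disconnect_init 10.0.0.2 now brings a fresh target up via nvmfappstart -m 0xF0. A minimal sketch of that step, using only the helper names visible in the trace (the real bodies live under spdk/test/nvmf and may differ):

    # sketch, assuming the helpers traced above; illustrative, not the script source
    kill -9 "$nvmfpid"            # old target (4072159) goes away; connect() now returns errno 111
    disconnect_init() {
        local ip=$1               # 10.0.0.2 here
        nvmfappstart -m 0xF0      # relaunch nvmf_tgt; the new pid appears below
        # ... re-create the tcp transport and the $ip:4420 listener (details not in this log)
    }
    disconnect_init 10.0.0.2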
00:31:23.295 10:03:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=4073171
00:31:23.295 10:03:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 4073171
00:31:23.296 10:03:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
10:03:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 4073171 ']'
00:31:23.296 10:03:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
10:03:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:23.296 10:03:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
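For reference, the restart command the harness runs inside the target's network namespace, annotated (flag readings assume SPDK's standard application options, not anything stated in this log):

    # -i 0       shared-memory instance id
    # -e 0xFFFF  tracepoint group mask (assumption: all groups enabled)
    # -m 0xF0    core mask, i.e. reactors on cores 4-7
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0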
00:31:23.296 10:03:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
10:03:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
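waitforlisten 4073171 now polls until the new target answers on its RPC socket; while it does, the host keeps retrying and the errno-111 triplet continues below until the 10.0.0.2:4420 listener returns. A sketch of such a poll loop, assuming the rpc_addr and max_retries values traced above (SPDK's actual helper in autotest_common.sh may differ):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died while starting
            [[ -S "$rpc_addr" ]] && return 0         # RPC UNIX socket is up
            sleep 0.1
        done
        return 1                                     # timed out
    }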
00:31:23.299 [2024-11-27 10:03:38.701996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.299 [2024-11-27 10:03:38.702025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.299 qpair failed and we were unable to recover it.
00:31:23.299 [2024-11-27 10:03:38.702447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.299 [2024-11-27 10:03:38.702478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.299 qpair failed and we were unable to recover it. 00:31:23.299 [2024-11-27 10:03:38.702791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.299 [2024-11-27 10:03:38.702818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.299 qpair failed and we were unable to recover it. 00:31:23.299 [2024-11-27 10:03:38.703178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.299 [2024-11-27 10:03:38.703209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.299 qpair failed and we were unable to recover it. 00:31:23.299 [2024-11-27 10:03:38.703364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.299 [2024-11-27 10:03:38.703396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.299 qpair failed and we were unable to recover it. 00:31:23.299 [2024-11-27 10:03:38.703771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.299 [2024-11-27 10:03:38.703800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.299 qpair failed and we were unable to recover it. 00:31:23.299 [2024-11-27 10:03:38.704185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.299 [2024-11-27 10:03:38.704229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.299 qpair failed and we were unable to recover it. 00:31:23.299 [2024-11-27 10:03:38.704570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.299 [2024-11-27 10:03:38.704599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.299 qpair failed and we were unable to recover it. 00:31:23.299 [2024-11-27 10:03:38.704940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.299 [2024-11-27 10:03:38.704968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.299 qpair failed and we were unable to recover it. 00:31:23.299 [2024-11-27 10:03:38.705325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.299 [2024-11-27 10:03:38.705356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.299 qpair failed and we were unable to recover it. 00:31:23.299 [2024-11-27 10:03:38.705721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.299 [2024-11-27 10:03:38.705750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.299 qpair failed and we were unable to recover it. 
00:31:23.299 [2024-11-27 10:03:38.706048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.299 [2024-11-27 10:03:38.706076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.299 qpair failed and we were unable to recover it. 00:31:23.299 [2024-11-27 10:03:38.706426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.299 [2024-11-27 10:03:38.706457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.299 qpair failed and we were unable to recover it. 00:31:23.299 [2024-11-27 10:03:38.706736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.299 [2024-11-27 10:03:38.706765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.299 qpair failed and we were unable to recover it. 00:31:23.299 [2024-11-27 10:03:38.707099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.300 [2024-11-27 10:03:38.707128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.300 qpair failed and we were unable to recover it. 00:31:23.300 [2024-11-27 10:03:38.707558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.300 [2024-11-27 10:03:38.707588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.300 qpair failed and we were unable to recover it. 00:31:23.300 [2024-11-27 10:03:38.707971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.300 [2024-11-27 10:03:38.708000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.300 qpair failed and we were unable to recover it. 00:31:23.300 [2024-11-27 10:03:38.708266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.300 [2024-11-27 10:03:38.708295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.300 qpair failed and we were unable to recover it. 00:31:23.300 [2024-11-27 10:03:38.708682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.300 [2024-11-27 10:03:38.708710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.300 qpair failed and we were unable to recover it. 00:31:23.300 [2024-11-27 10:03:38.708987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.300 [2024-11-27 10:03:38.709019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.300 qpair failed and we were unable to recover it. 00:31:23.300 [2024-11-27 10:03:38.709279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.300 [2024-11-27 10:03:38.709310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.300 qpair failed and we were unable to recover it. 
00:31:23.300 [2024-11-27 10:03:38.709653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.300 [2024-11-27 10:03:38.709682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.300 qpair failed and we were unable to recover it. 00:31:23.300 [2024-11-27 10:03:38.710091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.300 [2024-11-27 10:03:38.710121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.300 qpair failed and we were unable to recover it. 00:31:23.300 [2024-11-27 10:03:38.710503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.300 [2024-11-27 10:03:38.710540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.300 qpair failed and we were unable to recover it. 00:31:23.300 [2024-11-27 10:03:38.710894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.300 [2024-11-27 10:03:38.710923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.300 qpair failed and we were unable to recover it. 00:31:23.300 [2024-11-27 10:03:38.711297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.300 [2024-11-27 10:03:38.711328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.300 qpair failed and we were unable to recover it. 00:31:23.300 [2024-11-27 10:03:38.711592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.300 [2024-11-27 10:03:38.711621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.300 qpair failed and we were unable to recover it. 00:31:23.300 [2024-11-27 10:03:38.711966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.300 [2024-11-27 10:03:38.711994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.300 qpair failed and we were unable to recover it. 00:31:23.300 [2024-11-27 10:03:38.712393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.300 [2024-11-27 10:03:38.712423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.300 qpair failed and we were unable to recover it. 00:31:23.300 [2024-11-27 10:03:38.712788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.300 [2024-11-27 10:03:38.712817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.300 qpair failed and we were unable to recover it. 00:31:23.300 [2024-11-27 10:03:38.713147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.300 [2024-11-27 10:03:38.713185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.300 qpair failed and we were unable to recover it. 
00:31:23.300 [2024-11-27 10:03:38.713600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.300 [2024-11-27 10:03:38.713629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.300 qpair failed and we were unable to recover it. 00:31:23.300 [2024-11-27 10:03:38.713987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.300 [2024-11-27 10:03:38.714016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.300 qpair failed and we were unable to recover it. 00:31:23.300 [2024-11-27 10:03:38.714268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.300 [2024-11-27 10:03:38.714298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.300 qpair failed and we were unable to recover it. 00:31:23.300 [2024-11-27 10:03:38.714668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.300 [2024-11-27 10:03:38.714697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.300 qpair failed and we were unable to recover it. 00:31:23.300 [2024-11-27 10:03:38.714938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.300 [2024-11-27 10:03:38.714966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.300 qpair failed and we were unable to recover it. 00:31:23.300 [2024-11-27 10:03:38.715338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.300 [2024-11-27 10:03:38.715368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.300 qpair failed and we were unable to recover it. 00:31:23.300 [2024-11-27 10:03:38.715715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.300 [2024-11-27 10:03:38.715745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.300 qpair failed and we were unable to recover it. 00:31:23.300 [2024-11-27 10:03:38.716107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.300 [2024-11-27 10:03:38.716136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.300 qpair failed and we were unable to recover it. 00:31:23.300 [2024-11-27 10:03:38.716407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.300 [2024-11-27 10:03:38.716436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.300 qpair failed and we were unable to recover it. 00:31:23.300 [2024-11-27 10:03:38.716715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.300 [2024-11-27 10:03:38.716744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.300 qpair failed and we were unable to recover it. 
00:31:23.300 [2024-11-27 10:03:38.717181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.300 [2024-11-27 10:03:38.717213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.300 qpair failed and we were unable to recover it. 00:31:23.300 [2024-11-27 10:03:38.717496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.300 [2024-11-27 10:03:38.717524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.300 qpair failed and we were unable to recover it. 00:31:23.301 [2024-11-27 10:03:38.717874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.301 [2024-11-27 10:03:38.717902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.301 qpair failed and we were unable to recover it. 00:31:23.301 [2024-11-27 10:03:38.718292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.301 [2024-11-27 10:03:38.718324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.301 qpair failed and we were unable to recover it. 00:31:23.301 [2024-11-27 10:03:38.718673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.301 [2024-11-27 10:03:38.718702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.301 qpair failed and we were unable to recover it. 00:31:23.301 [2024-11-27 10:03:38.719080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.301 [2024-11-27 10:03:38.719110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.301 qpair failed and we were unable to recover it. 00:31:23.301 [2024-11-27 10:03:38.719495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.301 [2024-11-27 10:03:38.719525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.301 qpair failed and we were unable to recover it. 00:31:23.301 [2024-11-27 10:03:38.719898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.301 [2024-11-27 10:03:38.719927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.301 qpair failed and we were unable to recover it. 00:31:23.301 [2024-11-27 10:03:38.720200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.301 [2024-11-27 10:03:38.720231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.301 qpair failed and we were unable to recover it. 00:31:23.301 [2024-11-27 10:03:38.720600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.301 [2024-11-27 10:03:38.720633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.301 qpair failed and we were unable to recover it. 
00:31:23.301 [2024-11-27 10:03:38.720889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.301 [2024-11-27 10:03:38.720917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.301 qpair failed and we were unable to recover it. 00:31:23.301 [2024-11-27 10:03:38.721141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.301 [2024-11-27 10:03:38.721182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.301 qpair failed and we were unable to recover it. 00:31:23.301 [2024-11-27 10:03:38.721586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.301 [2024-11-27 10:03:38.721616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.301 qpair failed and we were unable to recover it. 00:31:23.301 [2024-11-27 10:03:38.721997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.301 [2024-11-27 10:03:38.722025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.301 qpair failed and we were unable to recover it. 00:31:23.301 [2024-11-27 10:03:38.722402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.301 [2024-11-27 10:03:38.722432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.301 qpair failed and we were unable to recover it. 00:31:23.301 [2024-11-27 10:03:38.722778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.301 [2024-11-27 10:03:38.722808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.301 qpair failed and we were unable to recover it. 00:31:23.301 [2024-11-27 10:03:38.722986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.301 [2024-11-27 10:03:38.723014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.301 qpair failed and we were unable to recover it. 00:31:23.301 [2024-11-27 10:03:38.723256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.301 [2024-11-27 10:03:38.723286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.301 qpair failed and we were unable to recover it. 00:31:23.301 [2024-11-27 10:03:38.723680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.301 [2024-11-27 10:03:38.723709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.301 qpair failed and we were unable to recover it. 00:31:23.301 [2024-11-27 10:03:38.723817] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
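errno = 111 on Linux is ECONNREFUSED: each connect() reached 10.0.0.2, but nothing was accepting on port 4420 (the NVMe/TCP default) at that point in the run, so the kernel answered with a TCP RST. A minimal standalone sketch (plain POSIX C, not SPDK code) that reproduces the same errno, assuming the target address is reachable but has no listener on the port:

    /* Standalone illustration only -- not SPDK's posix_sock_create().
     * Connecting to a reachable host with no listener on the port
     * yields errno = 111 (ECONNREFUSED), the value logged above. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                  /* NVMe/TCP default port */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* Prints: connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }

The repeated triplet above is this same failure surfacing through SPDK's posix sock layer and the nvme_tcp qpair connect path, retried over and over until the test gives up on the qpair.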
00:31:23.301 [2024-11-27 10:03:38.723817] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization...
00:31:23.301 [2024-11-27 10:03:38.723888] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... the connect() failures to 10.0.0.2:4420 resume immediately and repeat, errno = 111 each time, from 10:03:38.724 through 10:03:38.765; duplicate entries omitted ...]
00:31:23.578 [2024-11-27 10:03:38.765521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.578 [2024-11-27 10:03:38.765550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.578 qpair failed and we were unable to recover it. 00:31:23.578 [2024-11-27 10:03:38.765914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.578 [2024-11-27 10:03:38.765943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.578 qpair failed and we were unable to recover it. 00:31:23.578 [2024-11-27 10:03:38.766305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.578 [2024-11-27 10:03:38.766335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.578 qpair failed and we were unable to recover it. 00:31:23.578 [2024-11-27 10:03:38.766692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.578 [2024-11-27 10:03:38.766721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.578 qpair failed and we were unable to recover it. 00:31:23.578 [2024-11-27 10:03:38.767088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.578 [2024-11-27 10:03:38.767116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.578 qpair failed and we were unable to recover it. 00:31:23.578 [2024-11-27 10:03:38.767496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.578 [2024-11-27 10:03:38.767526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.578 qpair failed and we were unable to recover it. 00:31:23.578 [2024-11-27 10:03:38.767895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.578 [2024-11-27 10:03:38.767925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.578 qpair failed and we were unable to recover it. 00:31:23.578 [2024-11-27 10:03:38.768299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.578 [2024-11-27 10:03:38.768330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.578 qpair failed and we were unable to recover it. 00:31:23.578 [2024-11-27 10:03:38.768695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.578 [2024-11-27 10:03:38.768726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.578 qpair failed and we were unable to recover it. 00:31:23.578 [2024-11-27 10:03:38.769090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.578 [2024-11-27 10:03:38.769119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.578 qpair failed and we were unable to recover it. 
00:31:23.578 [2024-11-27 10:03:38.769472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.578 [2024-11-27 10:03:38.769501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.578 qpair failed and we were unable to recover it. 00:31:23.578 [2024-11-27 10:03:38.769846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.578 [2024-11-27 10:03:38.769873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.578 qpair failed and we were unable to recover it. 00:31:23.578 [2024-11-27 10:03:38.770258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.578 [2024-11-27 10:03:38.770287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.578 qpair failed and we were unable to recover it. 00:31:23.578 [2024-11-27 10:03:38.770657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.578 [2024-11-27 10:03:38.770685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.578 qpair failed and we were unable to recover it. 00:31:23.578 [2024-11-27 10:03:38.771041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.578 [2024-11-27 10:03:38.771069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.578 qpair failed and we were unable to recover it. 00:31:23.578 [2024-11-27 10:03:38.771425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.578 [2024-11-27 10:03:38.771454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.578 qpair failed and we were unable to recover it. 00:31:23.578 [2024-11-27 10:03:38.771824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.578 [2024-11-27 10:03:38.771853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.578 qpair failed and we were unable to recover it. 00:31:23.578 [2024-11-27 10:03:38.772205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.578 [2024-11-27 10:03:38.772236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.578 qpair failed and we were unable to recover it. 00:31:23.578 [2024-11-27 10:03:38.772598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.579 [2024-11-27 10:03:38.772626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.579 qpair failed and we were unable to recover it. 00:31:23.579 [2024-11-27 10:03:38.772970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.579 [2024-11-27 10:03:38.772998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.579 qpair failed and we were unable to recover it. 
00:31:23.579 [2024-11-27 10:03:38.773381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.579 [2024-11-27 10:03:38.773411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.579 qpair failed and we were unable to recover it. 00:31:23.579 [2024-11-27 10:03:38.773785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.579 [2024-11-27 10:03:38.773814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.579 qpair failed and we were unable to recover it. 00:31:23.579 [2024-11-27 10:03:38.774168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.579 [2024-11-27 10:03:38.774200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.579 qpair failed and we were unable to recover it. 00:31:23.579 [2024-11-27 10:03:38.774532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.579 [2024-11-27 10:03:38.774562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.579 qpair failed and we were unable to recover it. 00:31:23.579 [2024-11-27 10:03:38.774908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.579 [2024-11-27 10:03:38.774937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.579 qpair failed and we were unable to recover it. 00:31:23.579 [2024-11-27 10:03:38.775295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.579 [2024-11-27 10:03:38.775324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.579 qpair failed and we were unable to recover it. 00:31:23.579 [2024-11-27 10:03:38.775678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.579 [2024-11-27 10:03:38.775707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.579 qpair failed and we were unable to recover it. 00:31:23.579 [2024-11-27 10:03:38.776059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.579 [2024-11-27 10:03:38.776087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.579 qpair failed and we were unable to recover it. 00:31:23.579 [2024-11-27 10:03:38.776442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.579 [2024-11-27 10:03:38.776472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.579 qpair failed and we were unable to recover it. 00:31:23.579 [2024-11-27 10:03:38.776814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.579 [2024-11-27 10:03:38.776850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.579 qpair failed and we were unable to recover it. 
00:31:23.579 [2024-11-27 10:03:38.777213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.579 [2024-11-27 10:03:38.777243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.579 qpair failed and we were unable to recover it. 00:31:23.579 [2024-11-27 10:03:38.777541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.579 [2024-11-27 10:03:38.777568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.579 qpair failed and we were unable to recover it. 00:31:23.579 [2024-11-27 10:03:38.777933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.579 [2024-11-27 10:03:38.777961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.579 qpair failed and we were unable to recover it. 00:31:23.579 [2024-11-27 10:03:38.778307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.579 [2024-11-27 10:03:38.778337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.579 qpair failed and we were unable to recover it. 00:31:23.579 [2024-11-27 10:03:38.778602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.579 [2024-11-27 10:03:38.778630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.579 qpair failed and we were unable to recover it. 00:31:23.579 [2024-11-27 10:03:38.779006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.579 [2024-11-27 10:03:38.779035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.579 qpair failed and we were unable to recover it. 00:31:23.579 [2024-11-27 10:03:38.779381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.579 [2024-11-27 10:03:38.779413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.579 qpair failed and we were unable to recover it. 00:31:23.579 [2024-11-27 10:03:38.779767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.579 [2024-11-27 10:03:38.779795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.579 qpair failed and we were unable to recover it. 00:31:23.579 [2024-11-27 10:03:38.780129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.579 [2024-11-27 10:03:38.780157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.579 qpair failed and we were unable to recover it. 00:31:23.579 [2024-11-27 10:03:38.780537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.579 [2024-11-27 10:03:38.780568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.579 qpair failed and we were unable to recover it. 
00:31:23.579 [2024-11-27 10:03:38.780915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.579 [2024-11-27 10:03:38.780945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.579 qpair failed and we were unable to recover it. 00:31:23.579 [2024-11-27 10:03:38.781310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.579 [2024-11-27 10:03:38.781341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.579 qpair failed and we were unable to recover it. 00:31:23.579 [2024-11-27 10:03:38.781680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.579 [2024-11-27 10:03:38.781710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.579 qpair failed and we were unable to recover it. 00:31:23.579 [2024-11-27 10:03:38.782058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.579 [2024-11-27 10:03:38.782087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.579 qpair failed and we were unable to recover it. 00:31:23.579 [2024-11-27 10:03:38.782457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.579 [2024-11-27 10:03:38.782487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.579 qpair failed and we were unable to recover it. 00:31:23.579 [2024-11-27 10:03:38.782843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.579 [2024-11-27 10:03:38.782873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.579 qpair failed and we were unable to recover it. 00:31:23.579 [2024-11-27 10:03:38.783233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.579 [2024-11-27 10:03:38.783264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.579 qpair failed and we were unable to recover it. 00:31:23.579 [2024-11-27 10:03:38.783620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.579 [2024-11-27 10:03:38.783648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.579 qpair failed and we were unable to recover it. 00:31:23.579 [2024-11-27 10:03:38.784003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.579 [2024-11-27 10:03:38.784032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.579 qpair failed and we were unable to recover it. 00:31:23.579 [2024-11-27 10:03:38.784379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.579 [2024-11-27 10:03:38.784409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.579 qpair failed and we were unable to recover it. 
00:31:23.579 [2024-11-27 10:03:38.784765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.579 [2024-11-27 10:03:38.784793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.579 qpair failed and we were unable to recover it. 00:31:23.579 [2024-11-27 10:03:38.785137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.579 [2024-11-27 10:03:38.785177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.579 qpair failed and we were unable to recover it. 00:31:23.579 [2024-11-27 10:03:38.785519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.579 [2024-11-27 10:03:38.785549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.580 qpair failed and we were unable to recover it. 00:31:23.580 [2024-11-27 10:03:38.785924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.580 [2024-11-27 10:03:38.785952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.580 qpair failed and we were unable to recover it. 00:31:23.580 [2024-11-27 10:03:38.786407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.580 [2024-11-27 10:03:38.786437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.580 qpair failed and we were unable to recover it. 00:31:23.580 [2024-11-27 10:03:38.786698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.580 [2024-11-27 10:03:38.786730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.580 qpair failed and we were unable to recover it. 00:31:23.580 [2024-11-27 10:03:38.787105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.580 [2024-11-27 10:03:38.787140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.580 qpair failed and we were unable to recover it. 00:31:23.580 [2024-11-27 10:03:38.787437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.580 [2024-11-27 10:03:38.787468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.580 qpair failed and we were unable to recover it. 00:31:23.580 [2024-11-27 10:03:38.787696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.580 [2024-11-27 10:03:38.787727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.580 qpair failed and we were unable to recover it. 00:31:23.580 [2024-11-27 10:03:38.788116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.580 [2024-11-27 10:03:38.788145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.580 qpair failed and we were unable to recover it. 
00:31:23.580 [2024-11-27 10:03:38.788547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.580 [2024-11-27 10:03:38.788577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.580 qpair failed and we were unable to recover it. 00:31:23.580 [2024-11-27 10:03:38.788940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.580 [2024-11-27 10:03:38.788970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.580 qpair failed and we were unable to recover it. 00:31:23.580 [2024-11-27 10:03:38.789367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.580 [2024-11-27 10:03:38.789397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.580 qpair failed and we were unable to recover it. 00:31:23.580 [2024-11-27 10:03:38.789741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.580 [2024-11-27 10:03:38.789771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.580 qpair failed and we were unable to recover it. 00:31:23.580 [2024-11-27 10:03:38.790135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.580 [2024-11-27 10:03:38.790175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.580 qpair failed and we were unable to recover it. 00:31:23.580 [2024-11-27 10:03:38.790564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.580 [2024-11-27 10:03:38.790593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.580 qpair failed and we were unable to recover it. 00:31:23.580 [2024-11-27 10:03:38.790936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.580 [2024-11-27 10:03:38.790966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.580 qpair failed and we were unable to recover it. 00:31:23.580 [2024-11-27 10:03:38.791308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.580 [2024-11-27 10:03:38.791338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.580 qpair failed and we were unable to recover it. 00:31:23.580 [2024-11-27 10:03:38.791680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.580 [2024-11-27 10:03:38.791708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.580 qpair failed and we were unable to recover it. 00:31:23.580 [2024-11-27 10:03:38.792084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.580 [2024-11-27 10:03:38.792114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.580 qpair failed and we were unable to recover it. 
00:31:23.580 [2024-11-27 10:03:38.792252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.580 [2024-11-27 10:03:38.792285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.580 qpair failed and we were unable to recover it. 00:31:23.580 [2024-11-27 10:03:38.792656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.580 [2024-11-27 10:03:38.792686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.580 qpair failed and we were unable to recover it. 00:31:23.580 [2024-11-27 10:03:38.792935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.580 [2024-11-27 10:03:38.792963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.580 qpair failed and we were unable to recover it. 00:31:23.580 [2024-11-27 10:03:38.793311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.580 [2024-11-27 10:03:38.793341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.580 qpair failed and we were unable to recover it. 00:31:23.580 [2024-11-27 10:03:38.793711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.580 [2024-11-27 10:03:38.793740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.580 qpair failed and we were unable to recover it. 00:31:23.580 [2024-11-27 10:03:38.794102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.580 [2024-11-27 10:03:38.794132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.580 qpair failed and we were unable to recover it. 00:31:23.580 [2024-11-27 10:03:38.794490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.580 [2024-11-27 10:03:38.794520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.580 qpair failed and we were unable to recover it. 00:31:23.580 [2024-11-27 10:03:38.794873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.580 [2024-11-27 10:03:38.794902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.580 qpair failed and we were unable to recover it. 00:31:23.580 [2024-11-27 10:03:38.795258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.580 [2024-11-27 10:03:38.795287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.580 qpair failed and we were unable to recover it. 00:31:23.580 [2024-11-27 10:03:38.795676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.580 [2024-11-27 10:03:38.795705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.580 qpair failed and we were unable to recover it. 
00:31:23.580 [2024-11-27 10:03:38.796058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.580 [2024-11-27 10:03:38.796086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.580 qpair failed and we were unable to recover it. 00:31:23.580 [2024-11-27 10:03:38.796463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.580 [2024-11-27 10:03:38.796493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.580 qpair failed and we were unable to recover it. 00:31:23.580 [2024-11-27 10:03:38.796751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.580 [2024-11-27 10:03:38.796781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.580 qpair failed and we were unable to recover it. 00:31:23.580 [2024-11-27 10:03:38.797133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.580 [2024-11-27 10:03:38.797171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.580 qpair failed and we were unable to recover it. 00:31:23.581 [2024-11-27 10:03:38.797534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.581 [2024-11-27 10:03:38.797565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.581 qpair failed and we were unable to recover it. 00:31:23.581 [2024-11-27 10:03:38.797917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.581 [2024-11-27 10:03:38.797946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.581 qpair failed and we were unable to recover it. 00:31:23.581 [2024-11-27 10:03:38.798296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.581 [2024-11-27 10:03:38.798328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.581 qpair failed and we were unable to recover it. 00:31:23.581 [2024-11-27 10:03:38.798685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.581 [2024-11-27 10:03:38.798717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.581 qpair failed and we were unable to recover it. 00:31:23.581 [2024-11-27 10:03:38.799066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.581 [2024-11-27 10:03:38.799096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.581 qpair failed and we were unable to recover it. 00:31:23.581 [2024-11-27 10:03:38.799545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.581 [2024-11-27 10:03:38.799577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.581 qpair failed and we were unable to recover it. 
00:31:23.581 [2024-11-27 10:03:38.799937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.581 [2024-11-27 10:03:38.799966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.581 qpair failed and we were unable to recover it. 00:31:23.581 [2024-11-27 10:03:38.800320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.581 [2024-11-27 10:03:38.800350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.581 qpair failed and we were unable to recover it. 00:31:23.581 [2024-11-27 10:03:38.800725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.581 [2024-11-27 10:03:38.800754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.581 qpair failed and we were unable to recover it. 00:31:23.581 [2024-11-27 10:03:38.801105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.581 [2024-11-27 10:03:38.801136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.581 qpair failed and we were unable to recover it. 00:31:23.581 [2024-11-27 10:03:38.801496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.581 [2024-11-27 10:03:38.801527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.581 qpair failed and we were unable to recover it. 00:31:23.581 [2024-11-27 10:03:38.801879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.581 [2024-11-27 10:03:38.801910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.581 qpair failed and we were unable to recover it. 00:31:23.581 [2024-11-27 10:03:38.802262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.581 [2024-11-27 10:03:38.802293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.581 qpair failed and we were unable to recover it. 00:31:23.581 [2024-11-27 10:03:38.802658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.581 [2024-11-27 10:03:38.802688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.581 qpair failed and we were unable to recover it. 00:31:23.581 [2024-11-27 10:03:38.803048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.581 [2024-11-27 10:03:38.803078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.581 qpair failed and we were unable to recover it. 00:31:23.581 [2024-11-27 10:03:38.803433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.581 [2024-11-27 10:03:38.803465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.581 qpair failed and we were unable to recover it. 
00:31:23.581 [2024-11-27 10:03:38.803629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.581 [2024-11-27 10:03:38.803662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.581 qpair failed and we were unable to recover it. 00:31:23.581 [2024-11-27 10:03:38.804038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.581 [2024-11-27 10:03:38.804068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.581 qpair failed and we were unable to recover it. 00:31:23.581 [2024-11-27 10:03:38.804426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.581 [2024-11-27 10:03:38.804456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.581 qpair failed and we were unable to recover it. 00:31:23.581 [2024-11-27 10:03:38.804791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.581 [2024-11-27 10:03:38.804821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.581 qpair failed and we were unable to recover it. 00:31:23.581 [2024-11-27 10:03:38.805192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.581 [2024-11-27 10:03:38.805223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.581 qpair failed and we were unable to recover it. 00:31:23.581 [2024-11-27 10:03:38.805547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.581 [2024-11-27 10:03:38.805575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.581 qpair failed and we were unable to recover it. 00:31:23.581 [2024-11-27 10:03:38.805935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.581 [2024-11-27 10:03:38.805964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.581 qpair failed and we were unable to recover it. 00:31:23.581 [2024-11-27 10:03:38.806331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.581 [2024-11-27 10:03:38.806362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.581 qpair failed and we were unable to recover it. 00:31:23.581 [2024-11-27 10:03:38.806739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.581 [2024-11-27 10:03:38.806768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.581 qpair failed and we were unable to recover it. 00:31:23.581 [2024-11-27 10:03:38.807131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.581 [2024-11-27 10:03:38.807171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.581 qpair failed and we were unable to recover it. 
00:31:23.581 [2024-11-27 10:03:38.807534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.581 [2024-11-27 10:03:38.807566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.581 qpair failed and we were unable to recover it. 00:31:23.581 [2024-11-27 10:03:38.807931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.581 [2024-11-27 10:03:38.807960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.581 qpair failed and we were unable to recover it. 00:31:23.581 [2024-11-27 10:03:38.808249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.581 [2024-11-27 10:03:38.808280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.581 qpair failed and we were unable to recover it. 00:31:23.581 [2024-11-27 10:03:38.808522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.581 [2024-11-27 10:03:38.808551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.581 qpair failed and we were unable to recover it. 00:31:23.581 [2024-11-27 10:03:38.808908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.581 [2024-11-27 10:03:38.808939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.581 qpair failed and we were unable to recover it. 00:31:23.581 [2024-11-27 10:03:38.809286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.581 [2024-11-27 10:03:38.809317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.581 qpair failed and we were unable to recover it. 00:31:23.581 [2024-11-27 10:03:38.809540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.581 [2024-11-27 10:03:38.809569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.581 qpair failed and we were unable to recover it. 00:31:23.581 [2024-11-27 10:03:38.809940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.581 [2024-11-27 10:03:38.809968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.581 qpair failed and we were unable to recover it. 00:31:23.582 [2024-11-27 10:03:38.810381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.582 [2024-11-27 10:03:38.810414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.582 qpair failed and we were unable to recover it. 00:31:23.582 [2024-11-27 10:03:38.810756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.582 [2024-11-27 10:03:38.810786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.582 qpair failed and we were unable to recover it. 
00:31:23.582 [2024-11-27 10:03:38.811168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.582 [2024-11-27 10:03:38.811197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.582 qpair failed and we were unable to recover it. 00:31:23.582 [2024-11-27 10:03:38.811569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.582 [2024-11-27 10:03:38.811601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.582 qpair failed and we were unable to recover it. 00:31:23.582 [2024-11-27 10:03:38.811934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.582 [2024-11-27 10:03:38.811962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.582 qpair failed and we were unable to recover it. 00:31:23.582 [2024-11-27 10:03:38.812229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.582 [2024-11-27 10:03:38.812259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.582 qpair failed and we were unable to recover it. 00:31:23.582 [2024-11-27 10:03:38.812601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.582 [2024-11-27 10:03:38.812637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.582 qpair failed and we were unable to recover it. 00:31:23.582 [2024-11-27 10:03:38.812986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.582 [2024-11-27 10:03:38.813015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.582 qpair failed and we were unable to recover it. 00:31:23.582 [2024-11-27 10:03:38.813397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.582 [2024-11-27 10:03:38.813429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.582 qpair failed and we were unable to recover it. 00:31:23.582 [2024-11-27 10:03:38.813764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.582 [2024-11-27 10:03:38.813794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.582 qpair failed and we were unable to recover it. 00:31:23.582 [2024-11-27 10:03:38.814146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.582 [2024-11-27 10:03:38.814186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.582 qpair failed and we were unable to recover it. 00:31:23.582 [2024-11-27 10:03:38.814561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.582 [2024-11-27 10:03:38.814590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.582 qpair failed and we were unable to recover it. 
00:31:23.582 [2024-11-27 10:03:38.814825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.582 [2024-11-27 10:03:38.814855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.582 qpair failed and we were unable to recover it.
00:31:23.582 [... the connect() failed / sock connection error / qpair failed triplet above repeats with advancing timestamps from 10:03:38.815192 through 10:03:38.824753 ...]
00:31:23.583 [2024-11-27 10:03:38.825111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-11-27 10:03:38.825131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:31:23.583 [2024-11-27 10:03:38.825139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [... the triplet repeats from 10:03:38.825440 through 10:03:38.877836 ...]
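For readers triaging this run: errno = 111 is ECONNREFUSED on Linux, meaning nothing was listening on 10.0.0.2:4420 when the connection was attempted; nvme_tcp_qpair_connect_sock is simply surfacing the failed POSIX connect() from posix_sock_create. A minimal standalone sketch of the failing call (illustrative only, not SPDK's actual socket code; the address and port are copied from the log above):

    /* connect_refused.c - illustrative sketch, not SPDK code.
     * Reproduces the errno = 111 (ECONNREFUSED) failure seen above
     * when no listener is bound to 10.0.0.2:4420. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);              /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* With no listener on the target this prints:
             *   connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }

Once a listener comes up on that port the same connect() succeeds, which is why the initiator side keeps retrying the qpair, as the repeated triplets above show.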
00:31:23.587 [... the triplet repeats from 10:03:38.878199 through 10:03:38.879417 ...]
00:31:23.587 [2024-11-27 10:03:38.879765] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:23.587 [2024-11-27 10:03:38.879775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.587 [2024-11-27 10:03:38.879804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.587 qpair failed and we were unable to recover it.
00:31:23.587 [2024-11-27 10:03:38.879808] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:23.587 [2024-11-27 10:03:38.879825] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:23.587 [2024-11-27 10:03:38.879833] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:23.587 [2024-11-27 10:03:38.879841] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:23.587 [... the triplet repeats from 10:03:38.880179 through 10:03:38.880981 ...]
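Taken together, the app_setup_trace notices say the application was started with tracepoint group mask 0xFFFF and is recording trace events to the shared-memory file /dev/shm/nvmf_trace.0; per the notice text itself, running 'spdk_trace -s nvmf -i 0' while the app is up (or copying that file afterwards) captures the events for offline analysis.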
00:31:23.587 [... the triplet repeats from 10:03:38.881336 through 10:03:38.882028 ...]
00:31:23.587 [2024-11-27 10:03:38.882036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:31:23.587 [2024-11-27 10:03:38.882281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:31:23.587 [2024-11-27 10:03:38.882314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.587 [2024-11-27 10:03:38.882351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.587 qpair failed and we were unable to recover it.
00:31:23.587 [2024-11-27 10:03:38.882496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:31:23.587 [2024-11-27 10:03:38.882497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:31:23.587 [... the triplet repeats from 10:03:38.882618 through 10:03:38.884240 ...]
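The reactor_run notices mark the SPDK application's per-core event loops coming up on cores 4 through 7, consistent with the earlier "Total cores available: 4" notice; that they interleave with the connect() errors suggests the connection attempts were being retried while the application was still starting up.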
00:31:23.587 [2024-11-27 10:03:38.884624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.587 [2024-11-27 10:03:38.884652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.587 qpair failed and we were unable to recover it. 00:31:23.587 [2024-11-27 10:03:38.884918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.587 [2024-11-27 10:03:38.884945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.587 qpair failed and we were unable to recover it. 00:31:23.587 [2024-11-27 10:03:38.885236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.587 [2024-11-27 10:03:38.885267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.587 qpair failed and we were unable to recover it. 00:31:23.587 [2024-11-27 10:03:38.885516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.587 [2024-11-27 10:03:38.885545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.587 qpair failed and we were unable to recover it. 00:31:23.587 [2024-11-27 10:03:38.885877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.587 [2024-11-27 10:03:38.885905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.587 qpair failed and we were unable to recover it. 00:31:23.587 [2024-11-27 10:03:38.886273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.587 [2024-11-27 10:03:38.886304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.587 qpair failed and we were unable to recover it. 00:31:23.587 [2024-11-27 10:03:38.886663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.587 [2024-11-27 10:03:38.886692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.587 qpair failed and we were unable to recover it. 00:31:23.587 [2024-11-27 10:03:38.887079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.587 [2024-11-27 10:03:38.887109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.587 qpair failed and we were unable to recover it. 00:31:23.587 [2024-11-27 10:03:38.887474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.587 [2024-11-27 10:03:38.887503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.587 qpair failed and we were unable to recover it. 00:31:23.587 [2024-11-27 10:03:38.887754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.587 [2024-11-27 10:03:38.887781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.587 qpair failed and we were unable to recover it. 
00:31:23.587 [2024-11-27 10:03:38.888009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.587 [2024-11-27 10:03:38.888038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.587 qpair failed and we were unable to recover it. 00:31:23.587 [2024-11-27 10:03:38.888418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.587 [2024-11-27 10:03:38.888448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.587 qpair failed and we were unable to recover it. 00:31:23.588 [2024-11-27 10:03:38.888806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.888834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 00:31:23.588 [2024-11-27 10:03:38.889191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.889222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 00:31:23.588 [2024-11-27 10:03:38.889617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.889645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 00:31:23.588 [2024-11-27 10:03:38.890009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.890037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 00:31:23.588 [2024-11-27 10:03:38.890415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.890445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 00:31:23.588 [2024-11-27 10:03:38.890642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.890669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 00:31:23.588 [2024-11-27 10:03:38.890924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.890952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 00:31:23.588 [2024-11-27 10:03:38.891323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.891352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 
00:31:23.588 [2024-11-27 10:03:38.891718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.891747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 00:31:23.588 [2024-11-27 10:03:38.892108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.892138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 00:31:23.588 [2024-11-27 10:03:38.892553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.892584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 00:31:23.588 [2024-11-27 10:03:38.892828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.892855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 00:31:23.588 [2024-11-27 10:03:38.893215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.893245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 00:31:23.588 [2024-11-27 10:03:38.893572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.893602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 00:31:23.588 [2024-11-27 10:03:38.893832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.893860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 00:31:23.588 [2024-11-27 10:03:38.894113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.894141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 00:31:23.588 [2024-11-27 10:03:38.894535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.894564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 00:31:23.588 [2024-11-27 10:03:38.894803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.894830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 
00:31:23.588 [2024-11-27 10:03:38.895191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.895221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 00:31:23.588 [2024-11-27 10:03:38.895594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.895622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 00:31:23.588 [2024-11-27 10:03:38.895981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.896010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 00:31:23.588 [2024-11-27 10:03:38.896393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.896425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 00:31:23.588 [2024-11-27 10:03:38.896784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.896818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 00:31:23.588 [2024-11-27 10:03:38.897057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.897085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 00:31:23.588 [2024-11-27 10:03:38.897450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.897480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 00:31:23.588 [2024-11-27 10:03:38.897823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.897851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 00:31:23.588 [2024-11-27 10:03:38.898227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.898256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 00:31:23.588 [2024-11-27 10:03:38.898624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.898652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 
00:31:23.588 [2024-11-27 10:03:38.899024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.899052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 00:31:23.588 [2024-11-27 10:03:38.899402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.899431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 00:31:23.588 [2024-11-27 10:03:38.899686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.899714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 00:31:23.588 [2024-11-27 10:03:38.899942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.899970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 00:31:23.588 [2024-11-27 10:03:38.900341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.900370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 00:31:23.588 [2024-11-27 10:03:38.900596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.900624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 00:31:23.588 [2024-11-27 10:03:38.900997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.901026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 00:31:23.588 [2024-11-27 10:03:38.901384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.901414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 00:31:23.588 [2024-11-27 10:03:38.901769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.901798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 00:31:23.588 [2024-11-27 10:03:38.902184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.588 [2024-11-27 10:03:38.902221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.588 qpair failed and we were unable to recover it. 
00:31:23.588 [2024-11-27 10:03:38.902575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.902603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-11-27 10:03:38.902977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.903006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-11-27 10:03:38.903267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.903296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-11-27 10:03:38.903587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.903616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-11-27 10:03:38.903982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.904010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-11-27 10:03:38.904372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.904403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-11-27 10:03:38.904763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.904791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-11-27 10:03:38.905150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.905190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-11-27 10:03:38.905536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.905563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-11-27 10:03:38.905934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.905963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 
00:31:23.589 [2024-11-27 10:03:38.906320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.906351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-11-27 10:03:38.906712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.906746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-11-27 10:03:38.907115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.907143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-11-27 10:03:38.907554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.907583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-11-27 10:03:38.907853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.907880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-11-27 10:03:38.908203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.908233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-11-27 10:03:38.908583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.908612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-11-27 10:03:38.908841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.908870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-11-27 10:03:38.909205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.909235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-11-27 10:03:38.909533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.909561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 
00:31:23.589 [2024-11-27 10:03:38.909920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.909949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-11-27 10:03:38.910304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.910335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-11-27 10:03:38.910684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.910713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-11-27 10:03:38.911074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.911103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-11-27 10:03:38.911467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.911497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-11-27 10:03:38.911718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.911746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-11-27 10:03:38.912133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.912174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-11-27 10:03:38.912381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.912409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-11-27 10:03:38.912677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.912705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-11-27 10:03:38.913062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.913091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 
00:31:23.589 [2024-11-27 10:03:38.913484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.913514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-11-27 10:03:38.913887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.913916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-11-27 10:03:38.914261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.914292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-11-27 10:03:38.914653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.914680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-11-27 10:03:38.915039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.915067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-11-27 10:03:38.915413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.915443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-11-27 10:03:38.915797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.915826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-11-27 10:03:38.916178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-11-27 10:03:38.916208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-11-27 10:03:38.916468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.590 [2024-11-27 10:03:38.916497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.590 qpair failed and we were unable to recover it. 00:31:23.590 [2024-11-27 10:03:38.916843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.590 [2024-11-27 10:03:38.916873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.590 qpair failed and we were unable to recover it. 
00:31:23.590 [2024-11-27 10:03:38.917130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.590 [2024-11-27 10:03:38.917168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.590 qpair failed and we were unable to recover it. 00:31:23.590 [2024-11-27 10:03:38.917373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.590 [2024-11-27 10:03:38.917401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.590 qpair failed and we were unable to recover it. 00:31:23.590 [2024-11-27 10:03:38.917772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.590 [2024-11-27 10:03:38.917801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.590 qpair failed and we were unable to recover it. 00:31:23.590 [2024-11-27 10:03:38.918174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.590 [2024-11-27 10:03:38.918205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.590 qpair failed and we were unable to recover it. 00:31:23.590 [2024-11-27 10:03:38.918562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.590 [2024-11-27 10:03:38.918593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.590 qpair failed and we were unable to recover it. 00:31:23.590 [2024-11-27 10:03:38.918954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.590 [2024-11-27 10:03:38.918982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.590 qpair failed and we were unable to recover it. 00:31:23.590 [2024-11-27 10:03:38.919345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.590 [2024-11-27 10:03:38.919377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.590 qpair failed and we were unable to recover it. 00:31:23.590 [2024-11-27 10:03:38.919732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.590 [2024-11-27 10:03:38.919760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.590 qpair failed and we were unable to recover it. 00:31:23.590 [2024-11-27 10:03:38.920111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.590 [2024-11-27 10:03:38.920140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.590 qpair failed and we were unable to recover it. 00:31:23.590 [2024-11-27 10:03:38.920501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.590 [2024-11-27 10:03:38.920531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.590 qpair failed and we were unable to recover it. 
00:31:23.590 [2024-11-27 10:03:38.920888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.590 [2024-11-27 10:03:38.920917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.590 qpair failed and we were unable to recover it. 00:31:23.590 [2024-11-27 10:03:38.921275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.590 [2024-11-27 10:03:38.921305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.590 qpair failed and we were unable to recover it. 00:31:23.590 [2024-11-27 10:03:38.921529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.590 [2024-11-27 10:03:38.921558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.590 qpair failed and we were unable to recover it. 00:31:23.590 [2024-11-27 10:03:38.921813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.590 [2024-11-27 10:03:38.921841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.590 qpair failed and we were unable to recover it. 00:31:23.590 [2024-11-27 10:03:38.922189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.590 [2024-11-27 10:03:38.922220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.590 qpair failed and we were unable to recover it. 00:31:23.590 [2024-11-27 10:03:38.922436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.590 [2024-11-27 10:03:38.922464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.590 qpair failed and we were unable to recover it. 00:31:23.590 [2024-11-27 10:03:38.922824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.590 [2024-11-27 10:03:38.922853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.590 qpair failed and we were unable to recover it. 00:31:23.590 [2024-11-27 10:03:38.923209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.590 [2024-11-27 10:03:38.923240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.590 qpair failed and we were unable to recover it. 00:31:23.590 [2024-11-27 10:03:38.923621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.590 [2024-11-27 10:03:38.923650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.590 qpair failed and we were unable to recover it. 00:31:23.590 [2024-11-27 10:03:38.924024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.590 [2024-11-27 10:03:38.924052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.590 qpair failed and we were unable to recover it. 
00:31:23.590 [2024-11-27 10:03:38.924429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.590 [2024-11-27 10:03:38.924459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.590 qpair failed and we were unable to recover it. 00:31:23.590 [2024-11-27 10:03:38.924711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.590 [2024-11-27 10:03:38.924739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.590 qpair failed and we were unable to recover it. 00:31:23.590 [2024-11-27 10:03:38.924980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.590 [2024-11-27 10:03:38.925012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.590 qpair failed and we were unable to recover it. 00:31:23.590 [2024-11-27 10:03:38.925379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.590 [2024-11-27 10:03:38.925410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.590 qpair failed and we were unable to recover it. 00:31:23.590 [2024-11-27 10:03:38.925715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.590 [2024-11-27 10:03:38.925744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.590 qpair failed and we were unable to recover it. 00:31:23.590 [2024-11-27 10:03:38.926109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.590 [2024-11-27 10:03:38.926137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.590 qpair failed and we were unable to recover it. 00:31:23.590 [2024-11-27 10:03:38.926538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.590 [2024-11-27 10:03:38.926568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.590 qpair failed and we were unable to recover it. 00:31:23.590 [2024-11-27 10:03:38.926934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.590 [2024-11-27 10:03:38.926964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.590 qpair failed and we were unable to recover it. 00:31:23.590 [2024-11-27 10:03:38.927227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.590 [2024-11-27 10:03:38.927258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.590 qpair failed and we were unable to recover it. 00:31:23.590 [2024-11-27 10:03:38.927620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.590 [2024-11-27 10:03:38.927648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.590 qpair failed and we were unable to recover it. 
00:31:23.590 [2024-11-27 10:03:38.928009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.590 [2024-11-27 10:03:38.928037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.590 qpair failed and we were unable to recover it. 00:31:23.590 [2024-11-27 10:03:38.928415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.591 [2024-11-27 10:03:38.928446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.591 qpair failed and we were unable to recover it. 00:31:23.591 [2024-11-27 10:03:38.928790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.591 [2024-11-27 10:03:38.928818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.591 qpair failed and we were unable to recover it. 00:31:23.591 [2024-11-27 10:03:38.929056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.591 [2024-11-27 10:03:38.929084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.591 qpair failed and we were unable to recover it. 00:31:23.591 [2024-11-27 10:03:38.929465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.591 [2024-11-27 10:03:38.929494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.591 qpair failed and we were unable to recover it. 00:31:23.591 [2024-11-27 10:03:38.929879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.591 [2024-11-27 10:03:38.929908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.591 qpair failed and we were unable to recover it. 00:31:23.591 [2024-11-27 10:03:38.930283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.591 [2024-11-27 10:03:38.930313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.591 qpair failed and we were unable to recover it. 00:31:23.591 [2024-11-27 10:03:38.930660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.591 [2024-11-27 10:03:38.930689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.591 qpair failed and we were unable to recover it. 00:31:23.591 [2024-11-27 10:03:38.931092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.591 [2024-11-27 10:03:38.931120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.591 qpair failed and we were unable to recover it. 00:31:23.591 [2024-11-27 10:03:38.931566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.591 [2024-11-27 10:03:38.931608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.591 qpair failed and we were unable to recover it. 
00:31:23.591 [2024-11-27 10:03:38.931966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.591 [2024-11-27 10:03:38.931995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.591 qpair failed and we were unable to recover it. 00:31:23.591 [2024-11-27 10:03:38.932112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.591 [2024-11-27 10:03:38.932143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.591 qpair failed and we were unable to recover it. 00:31:23.591 [2024-11-27 10:03:38.932516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.591 [2024-11-27 10:03:38.932545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.591 qpair failed and we were unable to recover it. 00:31:23.591 [2024-11-27 10:03:38.932920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.591 [2024-11-27 10:03:38.932948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.591 qpair failed and we were unable to recover it. 00:31:23.591 [2024-11-27 10:03:38.933312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.591 [2024-11-27 10:03:38.933343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.591 qpair failed and we were unable to recover it. 00:31:23.591 [2024-11-27 10:03:38.933708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.591 [2024-11-27 10:03:38.933737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.591 qpair failed and we were unable to recover it. 00:31:23.591 [2024-11-27 10:03:38.934044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.591 [2024-11-27 10:03:38.934072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.591 qpair failed and we were unable to recover it. 00:31:23.591 [2024-11-27 10:03:38.934298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.591 [2024-11-27 10:03:38.934327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.591 qpair failed and we were unable to recover it. 00:31:23.591 [2024-11-27 10:03:38.934542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.591 [2024-11-27 10:03:38.934570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.591 qpair failed and we were unable to recover it. 00:31:23.591 [2024-11-27 10:03:38.934828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.591 [2024-11-27 10:03:38.934855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.591 qpair failed and we were unable to recover it. 
00:31:23.591 [2024-11-27 10:03:38.935231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.591 [2024-11-27 10:03:38.935262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.591 qpair failed and we were unable to recover it. 00:31:23.591 [2024-11-27 10:03:38.935601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.591 [2024-11-27 10:03:38.935630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.591 qpair failed and we were unable to recover it. 00:31:23.591 [2024-11-27 10:03:38.936003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.591 [2024-11-27 10:03:38.936031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.591 qpair failed and we were unable to recover it. 00:31:23.591 [2024-11-27 10:03:38.936376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.591 [2024-11-27 10:03:38.936407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.591 qpair failed and we were unable to recover it. 00:31:23.591 [2024-11-27 10:03:38.936769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.591 [2024-11-27 10:03:38.936799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.591 qpair failed and we were unable to recover it. 00:31:23.591 [2024-11-27 10:03:38.937153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.591 [2024-11-27 10:03:38.937193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.591 qpair failed and we were unable to recover it. 00:31:23.591 [2024-11-27 10:03:38.937425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.591 [2024-11-27 10:03:38.937457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.591 qpair failed and we were unable to recover it. 00:31:23.591 [2024-11-27 10:03:38.937802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.591 [2024-11-27 10:03:38.937831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.591 qpair failed and we were unable to recover it. 00:31:23.591 [2024-11-27 10:03:38.938189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.591 [2024-11-27 10:03:38.938221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.591 qpair failed and we were unable to recover it. 00:31:23.591 [2024-11-27 10:03:38.938587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.591 [2024-11-27 10:03:38.938615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.591 qpair failed and we were unable to recover it. 
00:31:23.591 [2024-11-27 10:03:38.938843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.591 [2024-11-27 10:03:38.938871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.591 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111 -> sock connection error -> "qpair failed and we were unable to recover it.") repeats for tqpair=0x115b0c0 through 2024-11-27 10:03:38.955472 ...]
00:31:23.592 [2024-11-27 10:03:38.955940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.592 [2024-11-27 10:03:38.956060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:23.592 qpair failed and we were unable to recover it.
00:31:23.597 [2024-11-27 10:03:39.012447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.597 [2024-11-27 10:03:39.012478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.597 qpair failed and we were unable to recover it. 00:31:23.597 [2024-11-27 10:03:39.012837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.597 [2024-11-27 10:03:39.012865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.597 qpair failed and we were unable to recover it. 00:31:23.597 [2024-11-27 10:03:39.013232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.597 [2024-11-27 10:03:39.013262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.597 qpair failed and we were unable to recover it. 00:31:23.597 [2024-11-27 10:03:39.013649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.597 [2024-11-27 10:03:39.013677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.597 qpair failed and we were unable to recover it. 00:31:23.597 [2024-11-27 10:03:39.013990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.597 [2024-11-27 10:03:39.014019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.597 qpair failed and we were unable to recover it. 00:31:23.597 [2024-11-27 10:03:39.014267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.597 [2024-11-27 10:03:39.014298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.597 qpair failed and we were unable to recover it. 00:31:23.597 [2024-11-27 10:03:39.014663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.597 [2024-11-27 10:03:39.014691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.597 qpair failed and we were unable to recover it. 00:31:23.597 [2024-11-27 10:03:39.015064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.597 [2024-11-27 10:03:39.015093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.597 qpair failed and we were unable to recover it. 00:31:23.597 [2024-11-27 10:03:39.015461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.597 [2024-11-27 10:03:39.015490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.597 qpair failed and we were unable to recover it. 00:31:23.597 [2024-11-27 10:03:39.015853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.597 [2024-11-27 10:03:39.015881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.597 qpair failed and we were unable to recover it. 
00:31:23.597 [2024-11-27 10:03:39.016242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.597 [2024-11-27 10:03:39.016272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.597 qpair failed and we were unable to recover it. 00:31:23.597 [2024-11-27 10:03:39.016663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.597 [2024-11-27 10:03:39.016691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.597 qpair failed and we were unable to recover it. 00:31:23.597 [2024-11-27 10:03:39.016927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.597 [2024-11-27 10:03:39.016956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.597 qpair failed and we were unable to recover it. 00:31:23.597 [2024-11-27 10:03:39.017092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.597 [2024-11-27 10:03:39.017121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.597 qpair failed and we were unable to recover it. 00:31:23.597 [2024-11-27 10:03:39.017461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.597 [2024-11-27 10:03:39.017491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.597 qpair failed and we were unable to recover it. 00:31:23.597 [2024-11-27 10:03:39.017712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.597 [2024-11-27 10:03:39.017740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.597 qpair failed and we were unable to recover it. 00:31:23.597 [2024-11-27 10:03:39.017982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.597 [2024-11-27 10:03:39.018011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.597 qpair failed and we were unable to recover it. 00:31:23.597 [2024-11-27 10:03:39.018473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.597 [2024-11-27 10:03:39.018503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.597 qpair failed and we were unable to recover it. 00:31:23.597 [2024-11-27 10:03:39.018858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.597 [2024-11-27 10:03:39.018886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.597 qpair failed and we were unable to recover it. 00:31:23.597 [2024-11-27 10:03:39.019230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.597 [2024-11-27 10:03:39.019259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.597 qpair failed and we were unable to recover it. 
00:31:23.597 [2024-11-27 10:03:39.019647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.597 [2024-11-27 10:03:39.019681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.597 qpair failed and we were unable to recover it. 00:31:23.597 [2024-11-27 10:03:39.020004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.597 [2024-11-27 10:03:39.020034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.597 qpair failed and we were unable to recover it. 00:31:23.597 [2024-11-27 10:03:39.020389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.597 [2024-11-27 10:03:39.020419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.597 qpair failed and we were unable to recover it. 00:31:23.597 [2024-11-27 10:03:39.020714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.597 [2024-11-27 10:03:39.020741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.597 qpair failed and we were unable to recover it. 00:31:23.597 [2024-11-27 10:03:39.021075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.597 [2024-11-27 10:03:39.021103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.597 qpair failed and we were unable to recover it. 00:31:23.597 [2024-11-27 10:03:39.021340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.597 [2024-11-27 10:03:39.021370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.597 qpair failed and we were unable to recover it. 00:31:23.597 [2024-11-27 10:03:39.021632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.597 [2024-11-27 10:03:39.021661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.597 qpair failed and we were unable to recover it. 00:31:23.597 [2024-11-27 10:03:39.021909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.597 [2024-11-27 10:03:39.021937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.597 qpair failed and we were unable to recover it. 00:31:23.597 [2024-11-27 10:03:39.022305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.597 [2024-11-27 10:03:39.022334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.597 qpair failed and we were unable to recover it. 00:31:23.597 [2024-11-27 10:03:39.022713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.598 [2024-11-27 10:03:39.022742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.598 qpair failed and we were unable to recover it. 
00:31:23.598 [2024-11-27 10:03:39.023102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.598 [2024-11-27 10:03:39.023131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.598 qpair failed and we were unable to recover it. 00:31:23.598 [2024-11-27 10:03:39.023383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.598 [2024-11-27 10:03:39.023416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.598 qpair failed and we were unable to recover it. 00:31:23.598 [2024-11-27 10:03:39.023769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.598 [2024-11-27 10:03:39.023797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.598 qpair failed and we were unable to recover it. 00:31:23.598 [2024-11-27 10:03:39.024030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.598 [2024-11-27 10:03:39.024058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.598 qpair failed and we were unable to recover it. 00:31:23.598 [2024-11-27 10:03:39.024416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.598 [2024-11-27 10:03:39.024447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.598 qpair failed and we were unable to recover it. 00:31:23.598 [2024-11-27 10:03:39.024801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.598 [2024-11-27 10:03:39.024830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.598 qpair failed and we were unable to recover it. 00:31:23.598 [2024-11-27 10:03:39.025216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.598 [2024-11-27 10:03:39.025245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.598 qpair failed and we were unable to recover it. 00:31:23.598 [2024-11-27 10:03:39.025497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.598 [2024-11-27 10:03:39.025525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.598 qpair failed and we were unable to recover it. 00:31:23.598 [2024-11-27 10:03:39.025875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.598 [2024-11-27 10:03:39.025903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.598 qpair failed and we were unable to recover it. 00:31:23.598 [2024-11-27 10:03:39.026271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.598 [2024-11-27 10:03:39.026300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.598 qpair failed and we were unable to recover it. 
00:31:23.598 [2024-11-27 10:03:39.026671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.598 [2024-11-27 10:03:39.026698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.598 qpair failed and we were unable to recover it. 00:31:23.598 [2024-11-27 10:03:39.027068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.598 [2024-11-27 10:03:39.027097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.598 qpair failed and we were unable to recover it. 00:31:23.598 [2024-11-27 10:03:39.027328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.598 [2024-11-27 10:03:39.027357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.598 qpair failed and we were unable to recover it. 00:31:23.598 [2024-11-27 10:03:39.027707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.598 [2024-11-27 10:03:39.027735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.598 qpair failed and we were unable to recover it. 00:31:23.598 [2024-11-27 10:03:39.028101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.598 [2024-11-27 10:03:39.028129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.598 qpair failed and we were unable to recover it. 00:31:23.598 [2024-11-27 10:03:39.028396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.598 [2024-11-27 10:03:39.028427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.598 qpair failed and we were unable to recover it. 00:31:23.598 [2024-11-27 10:03:39.028797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.598 [2024-11-27 10:03:39.028825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.598 qpair failed and we were unable to recover it. 00:31:23.894 [2024-11-27 10:03:39.029191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.894 [2024-11-27 10:03:39.029225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.894 qpair failed and we were unable to recover it. 00:31:23.894 [2024-11-27 10:03:39.029610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.894 [2024-11-27 10:03:39.029639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.894 qpair failed and we were unable to recover it. 00:31:23.894 [2024-11-27 10:03:39.030003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.894 [2024-11-27 10:03:39.030031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.894 qpair failed and we were unable to recover it. 
00:31:23.894 [2024-11-27 10:03:39.030389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.894 [2024-11-27 10:03:39.030418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.894 qpair failed and we were unable to recover it. 00:31:23.894 [2024-11-27 10:03:39.030792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.894 [2024-11-27 10:03:39.030822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.894 qpair failed and we were unable to recover it. 00:31:23.894 [2024-11-27 10:03:39.031194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.894 [2024-11-27 10:03:39.031223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.894 qpair failed and we were unable to recover it. 00:31:23.894 [2024-11-27 10:03:39.031590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.894 [2024-11-27 10:03:39.031621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.894 qpair failed and we were unable to recover it. 00:31:23.894 [2024-11-27 10:03:39.031983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.894 [2024-11-27 10:03:39.032014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.894 qpair failed and we were unable to recover it. 00:31:23.894 [2024-11-27 10:03:39.032383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.894 [2024-11-27 10:03:39.032413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.894 qpair failed and we were unable to recover it. 00:31:23.894 [2024-11-27 10:03:39.032776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.894 [2024-11-27 10:03:39.032804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.894 qpair failed and we were unable to recover it. 00:31:23.894 [2024-11-27 10:03:39.033202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.894 [2024-11-27 10:03:39.033262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.894 qpair failed and we were unable to recover it. 00:31:23.894 [2024-11-27 10:03:39.033484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.894 [2024-11-27 10:03:39.033512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.894 qpair failed and we were unable to recover it. 00:31:23.894 [2024-11-27 10:03:39.033854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.894 [2024-11-27 10:03:39.033882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.894 qpair failed and we were unable to recover it. 
00:31:23.894 [2024-11-27 10:03:39.034257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.894 [2024-11-27 10:03:39.034294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.894 qpair failed and we were unable to recover it. 00:31:23.894 [2024-11-27 10:03:39.034650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.894 [2024-11-27 10:03:39.034679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.894 qpair failed and we were unable to recover it. 00:31:23.894 [2024-11-27 10:03:39.035062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.894 [2024-11-27 10:03:39.035090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.894 qpair failed and we were unable to recover it. 00:31:23.894 [2024-11-27 10:03:39.035429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.894 [2024-11-27 10:03:39.035466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.894 qpair failed and we were unable to recover it. 00:31:23.894 [2024-11-27 10:03:39.035710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.894 [2024-11-27 10:03:39.035738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.894 qpair failed and we were unable to recover it. 00:31:23.894 [2024-11-27 10:03:39.036102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.894 [2024-11-27 10:03:39.036130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.894 qpair failed and we were unable to recover it. 00:31:23.894 [2024-11-27 10:03:39.036396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.894 [2024-11-27 10:03:39.036426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.894 qpair failed and we were unable to recover it. 00:31:23.894 [2024-11-27 10:03:39.036544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.894 [2024-11-27 10:03:39.036576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.894 qpair failed and we were unable to recover it. 00:31:23.894 [2024-11-27 10:03:39.036917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.894 [2024-11-27 10:03:39.036947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.894 qpair failed and we were unable to recover it. 00:31:23.894 [2024-11-27 10:03:39.037046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.894 [2024-11-27 10:03:39.037076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.894 qpair failed and we were unable to recover it. 
00:31:23.894 [2024-11-27 10:03:39.037440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.894 [2024-11-27 10:03:39.037471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.894 qpair failed and we were unable to recover it. 00:31:23.894 [2024-11-27 10:03:39.037827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.894 [2024-11-27 10:03:39.037856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.894 qpair failed and we were unable to recover it. 00:31:23.894 [2024-11-27 10:03:39.038214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.894 [2024-11-27 10:03:39.038244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.894 qpair failed and we were unable to recover it. 00:31:23.894 [2024-11-27 10:03:39.038463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.894 [2024-11-27 10:03:39.038491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.894 qpair failed and we were unable to recover it. 00:31:23.894 [2024-11-27 10:03:39.038835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.894 [2024-11-27 10:03:39.038864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.894 qpair failed and we were unable to recover it. 00:31:23.894 [2024-11-27 10:03:39.039113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.894 [2024-11-27 10:03:39.039141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.894 qpair failed and we were unable to recover it. 00:31:23.894 [2024-11-27 10:03:39.039532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.894 [2024-11-27 10:03:39.039561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.894 qpair failed and we were unable to recover it. 00:31:23.894 [2024-11-27 10:03:39.039792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.894 [2024-11-27 10:03:39.039819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.894 qpair failed and we were unable to recover it. 00:31:23.894 [2024-11-27 10:03:39.040177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.894 [2024-11-27 10:03:39.040207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.894 qpair failed and we were unable to recover it. 00:31:23.894 [2024-11-27 10:03:39.040588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.894 [2024-11-27 10:03:39.040617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.894 qpair failed and we were unable to recover it. 
00:31:23.894 [2024-11-27 10:03:39.040975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.041003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.041388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.041419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.041769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.041797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.042192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.042222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.042575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.042604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.042967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.042995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.043350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.043382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.043615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.043645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.043948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.043977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.044339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.044371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 
00:31:23.895 [2024-11-27 10:03:39.044600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.044629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.044897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.044926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.045252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.045284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.045644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.045673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.046042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.046070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.046435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.046464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.046701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.046730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.047107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.047135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.047403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.047434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.047645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.047676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 
00:31:23.895 [2024-11-27 10:03:39.047959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.047995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.048331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.048361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.048584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.048613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.048990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.049019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.049478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.049511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.049858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.049889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.050264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.050294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.050535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.050570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.050944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.050972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.051363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.051395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 
00:31:23.895 [2024-11-27 10:03:39.051757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.051788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.052169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.052200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.052559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.052588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.052956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.052985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.053355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.053387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.053744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.053772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.053991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.054019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.054347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.054378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.054722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.054752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.055112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.055141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 
00:31:23.895 [2024-11-27 10:03:39.055509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.055538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.055906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.055934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.056306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.056338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.056708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.056736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.057117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.057148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.895 qpair failed and we were unable to recover it. 00:31:23.895 [2024-11-27 10:03:39.057515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.895 [2024-11-27 10:03:39.057544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.896 qpair failed and we were unable to recover it. 00:31:23.896 [2024-11-27 10:03:39.057929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.896 [2024-11-27 10:03:39.057960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.896 qpair failed and we were unable to recover it. 00:31:23.896 [2024-11-27 10:03:39.058218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.896 [2024-11-27 10:03:39.058256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.896 qpair failed and we were unable to recover it. 00:31:23.896 [2024-11-27 10:03:39.058607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.896 [2024-11-27 10:03:39.058637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.896 qpair failed and we were unable to recover it. 00:31:23.896 [2024-11-27 10:03:39.058856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.896 [2024-11-27 10:03:39.058884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.896 qpair failed and we were unable to recover it. 
00:31:23.896 [2024-11-27 10:03:39.059106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.896 [2024-11-27 10:03:39.059135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.896 qpair failed and we were unable to recover it. 00:31:23.896 [2024-11-27 10:03:39.059385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.896 [2024-11-27 10:03:39.059414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.896 qpair failed and we were unable to recover it. 00:31:23.896 [2024-11-27 10:03:39.059784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.896 [2024-11-27 10:03:39.059815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.896 qpair failed and we were unable to recover it. 00:31:23.896 [2024-11-27 10:03:39.060178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.896 [2024-11-27 10:03:39.060209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.896 qpair failed and we were unable to recover it. 00:31:23.896 [2024-11-27 10:03:39.060584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.896 [2024-11-27 10:03:39.060620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.896 qpair failed and we were unable to recover it. 00:31:23.896 [2024-11-27 10:03:39.060994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.896 [2024-11-27 10:03:39.061023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.896 qpair failed and we were unable to recover it. 00:31:23.896 [2024-11-27 10:03:39.061296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.896 [2024-11-27 10:03:39.061327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.896 qpair failed and we were unable to recover it. 00:31:23.896 [2024-11-27 10:03:39.061674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.896 [2024-11-27 10:03:39.061703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.896 qpair failed and we were unable to recover it. 00:31:23.896 [2024-11-27 10:03:39.062085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.896 [2024-11-27 10:03:39.062117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.896 qpair failed and we were unable to recover it. 00:31:23.896 [2024-11-27 10:03:39.062349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.896 [2024-11-27 10:03:39.062380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.896 qpair failed and we were unable to recover it. 
00:31:23.896 [2024-11-27 10:03:39.062737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.896 [2024-11-27 10:03:39.062766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.896 qpair failed and we were unable to recover it.
00:31:23.896 [... the same error pair repeats approximately 200 more times between 10:03:39.063 and 10:03:39.137: posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7fa220000b90 (addr=10.0.0.2, port=4420), and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:31:23.900 [2024-11-27 10:03:39.137657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.137686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it.
00:31:23.900 [2024-11-27 10:03:39.138055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.138084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.138458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.138489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.138723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.138751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.139113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.139142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.139376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.139405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.139770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.139798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.140019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.140047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.140409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.140439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.140805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.140833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.141187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.141217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 
00:31:23.900 [2024-11-27 10:03:39.141568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.141596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.141958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.141987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.142343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.142374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.142754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.142783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.143002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.143030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.143363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.143392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.143656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.143684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.143842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.143877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.144240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.144270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.144615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.144644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 
00:31:23.900 [2024-11-27 10:03:39.145025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.145053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.145388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.145418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.145782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.145810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.146175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.146204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.146557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.146587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.146817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.146845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.147098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.147126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.147398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.147428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.147801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.147830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.148196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.148227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 
00:31:23.900 [2024-11-27 10:03:39.148565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.148593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.148957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.148987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.149330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.149361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.149698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.149726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.149826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.149853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.150259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.150288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.150479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.150507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.150754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.150782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.151125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.151153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.151387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.151415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 
00:31:23.900 [2024-11-27 10:03:39.151750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.151780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.152133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.152171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.900 [2024-11-27 10:03:39.152389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.900 [2024-11-27 10:03:39.152418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.900 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.152793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.152821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.153041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.153069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.153284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.153314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.153659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.153687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.154043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.154071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.154412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.154442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.154790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.154818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 
00:31:23.901 [2024-11-27 10:03:39.155085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.155113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.155500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.155530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.155750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.155777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.156118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.156146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.156435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.156464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.156801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.156828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.157195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.157226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.157553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.157588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.157953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.157982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.158324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.158354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 
00:31:23.901 [2024-11-27 10:03:39.158744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.158772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.159155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.159195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.159559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.159586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.159939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.159967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.160337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.160367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.160597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.160625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.161003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.161031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.161408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.161437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.161799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.161827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.161921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.161950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 
00:31:23.901 [2024-11-27 10:03:39.162264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.162294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.162537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.162569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.162919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.162947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.163319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.163348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.163707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.163735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.163968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.163996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.164240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.164269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.164510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.164538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.164888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.164916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.165283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.165312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 
00:31:23.901 [2024-11-27 10:03:39.165688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.165716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.165927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.165954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.166225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.166255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.166617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.166645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.166857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.166886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.167241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.167270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.167649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.167678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.168045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.168073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.168288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.168318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.168683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.168711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 
00:31:23.901 [2024-11-27 10:03:39.168956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.168983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.169211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.169241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.169564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.169595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.169947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.169975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.170337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.170367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.170730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.170758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.171137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.171173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.171542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.171586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.171937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.171965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.901 [2024-11-27 10:03:39.172177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.172206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 
00:31:23.901 [2024-11-27 10:03:39.172604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.901 [2024-11-27 10:03:39.172631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.901 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.173004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.173032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.173403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.173433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.173791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.173819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.174187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.174217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.174566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.174596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.174836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.174864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.175236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.175266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.175626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.175655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.175885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.175914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 
00:31:23.902 [2024-11-27 10:03:39.176133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.176168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.176553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.176582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.176958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.176986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.177339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.177370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.177704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.177732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.178094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.178122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.178494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.178524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.178894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.178922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.179279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.179309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.179699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.179729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 
00:31:23.902 [2024-11-27 10:03:39.180079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.180108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.180353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.180381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.180727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.180755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.181114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.181142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.181313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.181342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.181769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.181797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.182178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.182208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.182555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.182584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.182800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.182829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.183051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.183079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 
00:31:23.902 [2024-11-27 10:03:39.183436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.183467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.183820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.183850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.184208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.184237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.184433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.184460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.184825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.184854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.185209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.185239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.185556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.185583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.185944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.185978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.186196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.186225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.186466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.186493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 
00:31:23.902 [2024-11-27 10:03:39.186866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.186894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.187260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.187290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.187638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.187666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.188038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.188066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.188298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.188328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.188537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.188565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.188932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.188959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.189322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.189352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.189451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.189478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 00:31:23.902 [2024-11-27 10:03:39.190070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.902 [2024-11-27 10:03:39.190220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.902 qpair failed and we were unable to recover it. 
00:31:23.902 [2024-11-27 10:03:39.190685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.902 [2024-11-27 10:03:39.190723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.902 qpair failed and we were unable to recover it.
00:31:23.902 [2024-11-27 10:03:39.191111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.902 [2024-11-27 10:03:39.191141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.902 qpair failed and we were unable to recover it.
00:31:23.902 [2024-11-27 10:03:39.191406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.902 [2024-11-27 10:03:39.191435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.902 qpair failed and we were unable to recover it.
00:31:23.902 [2024-11-27 10:03:39.191656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.902 [2024-11-27 10:03:39.191686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.902 qpair failed and we were unable to recover it.
00:31:23.902 [2024-11-27 10:03:39.192038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.902 [2024-11-27 10:03:39.192069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.902 qpair failed and we were unable to recover it.
00:31:23.902 [2024-11-27 10:03:39.192285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.902 [2024-11-27 10:03:39.192316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.902 qpair failed and we were unable to recover it.
00:31:23.902 [2024-11-27 10:03:39.192636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.902 [2024-11-27 10:03:39.192665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.902 qpair failed and we were unable to recover it.
00:31:23.902 [2024-11-27 10:03:39.193039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.902 [2024-11-27 10:03:39.193068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.902 qpair failed and we were unable to recover it.
00:31:23.902 [2024-11-27 10:03:39.193344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.902 [2024-11-27 10:03:39.193375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.902 qpair failed and we were unable to recover it.
00:31:23.902 [2024-11-27 10:03:39.193654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.902 [2024-11-27 10:03:39.193689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.902 qpair failed and we were unable to recover it.
00:31:23.902 [2024-11-27 10:03:39.194007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.902 [2024-11-27 10:03:39.194037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.902 qpair failed and we were unable to recover it.
00:31:23.902 [2024-11-27 10:03:39.194397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.902 [2024-11-27 10:03:39.194426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.902 qpair failed and we were unable to recover it.
00:31:23.902 [2024-11-27 10:03:39.194784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.902 [2024-11-27 10:03:39.194811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.902 qpair failed and we were unable to recover it.
00:31:23.902 [2024-11-27 10:03:39.195181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.902 [2024-11-27 10:03:39.195211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.902 qpair failed and we were unable to recover it.
00:31:23.902 [2024-11-27 10:03:39.195493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.902 [2024-11-27 10:03:39.195529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.902 qpair failed and we were unable to recover it.
00:31:23.902 [2024-11-27 10:03:39.195882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.902 [2024-11-27 10:03:39.195911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.902 qpair failed and we were unable to recover it.
00:31:23.902 [2024-11-27 10:03:39.196275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.902 [2024-11-27 10:03:39.196306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.902 qpair failed and we were unable to recover it.
00:31:23.902 [2024-11-27 10:03:39.196680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.902 [2024-11-27 10:03:39.196708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.902 qpair failed and we were unable to recover it.
00:31:23.902 [2024-11-27 10:03:39.196926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.902 [2024-11-27 10:03:39.196954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.902 qpair failed and we were unable to recover it.
00:31:23.902 [2024-11-27 10:03:39.197330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.902 [2024-11-27 10:03:39.197360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.902 qpair failed and we were unable to recover it.
00:31:23.902 [2024-11-27 10:03:39.197712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.902 [2024-11-27 10:03:39.197742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.902 qpair failed and we were unable to recover it.
00:31:23.902 [2024-11-27 10:03:39.198104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.902 [2024-11-27 10:03:39.198132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.902 qpair failed and we were unable to recover it.
00:31:23.902 [2024-11-27 10:03:39.198544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.902 [2024-11-27 10:03:39.198573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.902 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.198775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.198803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.199055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.199088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.199512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.199541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.199942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.199970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.200227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.200260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.200522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.200550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.200904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.200934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.201279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.201308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.201531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.201558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.201786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.201814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.202187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.202218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.202582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.202610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.202977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.203005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.203105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.203132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.203591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.203620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.203991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.204019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.204367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.204399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.204784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.204814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.205030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.205065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.205431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.205461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.205824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.205852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.206230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.206259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.206502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.206530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.206874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.206903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.207127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.207155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.207537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.207566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.207805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.207833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.208096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.208125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.208469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.208498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.208878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.208907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.209264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.209294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.209665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.209693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.209920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.209948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.210321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.210350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.210730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.210758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.210985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.211012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.211418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.211447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.211821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.211849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.212218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.212247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.212480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.212508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.212800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.212828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.213188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.213217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.213554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.213583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.213951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.213979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.214348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.214376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.214718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.214747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.215123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.215152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.215396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.215424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.215790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.215817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.216174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.216204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.216471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.216503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.216722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.216751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.216980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.217008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.217222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.217253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.217684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.217711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.218088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.218117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.218485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.218513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.218897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.218925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.219022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.219049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.219489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.219519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.219887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.219915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.220278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.220306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.220683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.220710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.220973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.221000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.221352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.221382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.221761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.221789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.222147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.222185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.222541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.222569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.222822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.222854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.223151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.223187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.223557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.223586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.223950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.223978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.224331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.224361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.224733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.224760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.225136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.225188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.225581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.225609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.225837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.225865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.226231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.903 [2024-11-27 10:03:39.226261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.903 qpair failed and we were unable to recover it.
00:31:23.903 [2024-11-27 10:03:39.226631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.226659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.227036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.227064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.227437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.227466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.227846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.227873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.228238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.228266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.228623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.228651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.229033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.229060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.229400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.229429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.229801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.229835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.230070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.230099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.230460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.230490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.230729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.230761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.231122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.231150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.231507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.231535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.231898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.231925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.232268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.232297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.232653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.232682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.233041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.233069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.233423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.233453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.233784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.233813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.234183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.234213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.234581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.234608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.234975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.235003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.235370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.235399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.235753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.235781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.236040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.236067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.236409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.236438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.236806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.236833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.237070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.237098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.237444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.237473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.237841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.237868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.238224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.238254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.238472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.238500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.238688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.238715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.239095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.239123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.239231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.239275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.239620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.239648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.239996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.240025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.240304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.240334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.240544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.240571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.240945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.240974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.241202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.241249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.241611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.241641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.242018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.242046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.242278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.242306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.242687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.242715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.243056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.243085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.243467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.243495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.243857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.243884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.244117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.244145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.244540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.244568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.244904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.244932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.245179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.245207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.245607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.245635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.245847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.245875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.246254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.246283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.246648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.246676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.247031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.247059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.247421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.247449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.247814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.247842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.248215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.248245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.248470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.248498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.248705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.248733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.249104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.249134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.249503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.249532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.249897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.249926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.250300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.250329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.250671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.250699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.251067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.904 [2024-11-27 10:03:39.251095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:23.904 qpair failed and we were unable to recover it.
00:31:23.904 [2024-11-27 10:03:39.251465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.904 [2024-11-27 10:03:39.251494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.904 qpair failed and we were unable to recover it. 00:31:23.904 [2024-11-27 10:03:39.251858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.904 [2024-11-27 10:03:39.251885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.904 qpair failed and we were unable to recover it. 00:31:23.904 [2024-11-27 10:03:39.252192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.904 [2024-11-27 10:03:39.252221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.904 qpair failed and we were unable to recover it. 00:31:23.904 [2024-11-27 10:03:39.252607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.904 [2024-11-27 10:03:39.252635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.904 qpair failed and we were unable to recover it. 00:31:23.904 [2024-11-27 10:03:39.252986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.904 [2024-11-27 10:03:39.253015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.904 qpair failed and we were unable to recover it. 00:31:23.904 [2024-11-27 10:03:39.253262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.904 [2024-11-27 10:03:39.253295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.904 qpair failed and we were unable to recover it. 00:31:23.904 [2024-11-27 10:03:39.253668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.904 [2024-11-27 10:03:39.253696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.904 qpair failed and we were unable to recover it. 00:31:23.904 [2024-11-27 10:03:39.253883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.904 [2024-11-27 10:03:39.253915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.904 qpair failed and we were unable to recover it. 00:31:23.904 [2024-11-27 10:03:39.254261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.904 [2024-11-27 10:03:39.254291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.904 qpair failed and we were unable to recover it. 00:31:23.904 [2024-11-27 10:03:39.254660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.904 [2024-11-27 10:03:39.254688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.904 qpair failed and we were unable to recover it. 
00:31:23.904 [2024-11-27 10:03:39.254915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.904 [2024-11-27 10:03:39.254942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.904 qpair failed and we were unable to recover it. 00:31:23.904 [2024-11-27 10:03:39.255272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.904 [2024-11-27 10:03:39.255301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.904 qpair failed and we were unable to recover it. 00:31:23.904 [2024-11-27 10:03:39.255641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.904 [2024-11-27 10:03:39.255669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.904 qpair failed and we were unable to recover it. 00:31:23.904 [2024-11-27 10:03:39.255918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.904 [2024-11-27 10:03:39.255946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.904 qpair failed and we were unable to recover it. 00:31:23.904 [2024-11-27 10:03:39.256331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.904 [2024-11-27 10:03:39.256361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.904 qpair failed and we were unable to recover it. 00:31:23.904 [2024-11-27 10:03:39.256581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.904 [2024-11-27 10:03:39.256608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.904 qpair failed and we were unable to recover it. 00:31:23.904 [2024-11-27 10:03:39.256988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.904 [2024-11-27 10:03:39.257016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.904 qpair failed and we were unable to recover it. 00:31:23.904 [2024-11-27 10:03:39.257226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.904 [2024-11-27 10:03:39.257255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.904 qpair failed and we were unable to recover it. 00:31:23.904 [2024-11-27 10:03:39.257670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.904 [2024-11-27 10:03:39.257698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.904 qpair failed and we were unable to recover it. 00:31:23.904 [2024-11-27 10:03:39.258062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.904 [2024-11-27 10:03:39.258090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.904 qpair failed and we were unable to recover it. 
00:31:23.904 [2024-11-27 10:03:39.258338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.258367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.258579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.258607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.258963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.258990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.259250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.259281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.259644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.259672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.260038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.260066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.260423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.260452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.260823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.260851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.261229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.261258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.261640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.261667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 
00:31:23.905 [2024-11-27 10:03:39.262042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.262070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.262422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.262452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.262817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.262846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.263099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.263128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.263377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.263413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.263764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.263793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.264148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.264184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.264407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.264436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.264781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.264810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.265129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.265156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 
00:31:23.905 [2024-11-27 10:03:39.265386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.265414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.265790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.265818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.266182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.266212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.266443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.266471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.266830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.266858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.267229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.267258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.267612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.267641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.267740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.267768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.268089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.268122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.268487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.268516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 
00:31:23.905 [2024-11-27 10:03:39.268730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.268757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.269119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.269147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.269375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.269403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.269782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.269810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.270185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.270216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.270569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.270598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.270740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.270768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.271008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.271037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.271380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.271409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.271759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.271788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 
00:31:23.905 [2024-11-27 10:03:39.272170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.272199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.272459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.272497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.272866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.272894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.273262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.273291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.273651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.273679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.274039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.274067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.274448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.274476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.274845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.274873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.275235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.275265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.275634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.275662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 
00:31:23.905 [2024-11-27 10:03:39.276037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.276065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.276423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.276451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.276838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.276867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.277240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.277270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.277531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.277558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.277959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.277988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.278353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.278383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.278595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.278622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.279007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.279035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.279382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.279411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 
00:31:23.905 [2024-11-27 10:03:39.279511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.279537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.280053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.280155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.280521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.280559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.280945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.280976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.281462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.281566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.282021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.282058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.282412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.282446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.282816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.282845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.283055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.283103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.283442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.283472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 
00:31:23.905 [2024-11-27 10:03:39.283814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.283844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.284070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.284098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.284520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.284549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.284904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.284933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.285289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.285318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.285671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.285699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.285963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.286001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.286336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.286367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.286735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.286764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.287016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.287043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 
00:31:23.905 [2024-11-27 10:03:39.287410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.287440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.287822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.287850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.288211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.905 [2024-11-27 10:03:39.288242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.905 qpair failed and we were unable to recover it. 00:31:23.905 [2024-11-27 10:03:39.288615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.288644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.288784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.288811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.289155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.289199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.289563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.289592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.289956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.289985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.290346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.290377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.290759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.290789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 
00:31:23.906 [2024-11-27 10:03:39.291142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.291199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.291422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.291451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.291699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.291726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.292018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.292045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.292383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.292411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.292780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.292807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.293196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.293224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.293580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.293607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.293835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.293862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.294219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.294248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 
00:31:23.906 [2024-11-27 10:03:39.294631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.294658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.295033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.295059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.295414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.295442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.295757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.295787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.296175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.296204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.296477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.296505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.296864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.296893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.297240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.297272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.297660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.297696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.298054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.298084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 
00:31:23.906 [2024-11-27 10:03:39.298332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.298363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.298699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.298727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.299090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.299121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.299508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.299538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.299755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.299783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.300170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.300202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.300520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.300550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.300918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.300948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.301297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.301329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.301695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.301725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 
00:31:23.906 [2024-11-27 10:03:39.302096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.302125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.302504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.302535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.302903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.302934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.303168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.303200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.303468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.303499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.303846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.303875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.304237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.304269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.304636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.304665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.305026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.305057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 00:31:23.906 [2024-11-27 10:03:39.305416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.906 [2024-11-27 10:03:39.305447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:23.906 qpair failed and we were unable to recover it. 
00:31:23.906 [2024-11-27 10:03:39.305816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.906 [2024-11-27 10:03:39.305846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420
00:31:23.906 qpair failed and we were unable to recover it.
[this connect() failed / sock connection error / "qpair failed and we were unable to recover it." triplet repeats near-identically ~80 times for tqpair=0x7fa220000b90, timestamps 10:03:39.305816 through 10:03:39.334887; duplicates omitted]
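For context: errno = 111 is ECONNREFUSED on Linux, meaning nothing was accepting TCP connections at 10.0.0.2:4420 when the initiator retried, presumably because the target's listener had been taken down at this point in the test. A minimal, self-contained C sketch of how that errno surfaces from a plain connect(); the address and port mirror the log, everything else is illustrative:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* A blocking TCP connect to an address with no listener fails with
         * ECONNREFUSED (111 on Linux) -- the errno reported by
         * posix_sock_create in the entries above. */
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };

        if (fd < 0) {
            return 1;
        }
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }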
00:31:24.181 [2024-11-27 10:03:39.335278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:24.181 [2024-11-27 10:03:39.335387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420
00:31:24.181 qpair failed and we were unable to recover it.
[the same triplet repeats near-identically ~70 times for tqpair=0x115b0c0, timestamps 10:03:39.335278 through 10:03:39.361878; duplicates omitted]
00:31:24.184 Read completed with error (sct=0, sc=8)
00:31:24.184 starting I/O failed
[31 more outstanding completions (24 reads, 8 writes in total) fail with (sct=0, sc=8), each followed by "starting I/O failed"; duplicates omitted]
00:31:24.184 [2024-11-27 10:03:39.362708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
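For context on the status codes: sct is the NVMe Status Code Type (0 = Generic Command Status) and sc the Status Code; in the generic set, 0x08 is "Command Aborted due to SQ Deletion", which SPDK reports for I/O still outstanding when its qpair is torn down. The "CQ transport error -6" is -ENXIO, matching the "(No such device or address)" text in the entry itself. A minimal sketch, assuming SPDK's public completion API, of how a completion callback would decode these fields; io_complete and its wiring are hypothetical:

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Hypothetical I/O completion callback (spdk_nvme_cmd_cb signature):
     * decodes the status fields printed in the log entries above. */
    static void
    io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
        (void)ctx;
        if (spdk_nvme_cpl_is_error(cpl)) {
            /* sct=0 (generic), sc=0x08: the command was aborted because
             * its submission queue (qpair) was deleted mid-flight. */
            fprintf(stderr, "I/O failed: sct=%u, sc=%u\n",
                    (unsigned)cpl->status.sct, (unsigned)cpl->status.sc);
        }
    }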
00:31:24.184 [2024-11-27 10:03:39.363179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:24.184 [2024-11-27 10:03:39.363251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:31:24.184 qpair failed and we were unable to recover it.
[the same triplet repeats near-identically ~50 times for tqpair=0x7fa228000b90, timestamps 10:03:39.363179 through 10:03:39.381537; duplicates omitted]
00:31:24.186 [2024-11-27 10:03:39.381744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.186 [2024-11-27 10:03:39.381773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.186 qpair failed and we were unable to recover it. 00:31:24.186 [2024-11-27 10:03:39.382166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.186 [2024-11-27 10:03:39.382198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.186 qpair failed and we were unable to recover it. 00:31:24.186 [2024-11-27 10:03:39.382571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.186 [2024-11-27 10:03:39.382600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.186 qpair failed and we were unable to recover it. 00:31:24.186 [2024-11-27 10:03:39.382867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.186 [2024-11-27 10:03:39.382897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.186 qpair failed and we were unable to recover it. 00:31:24.186 [2024-11-27 10:03:39.383253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.186 [2024-11-27 10:03:39.383284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.186 qpair failed and we were unable to recover it. 00:31:24.186 [2024-11-27 10:03:39.383654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.186 [2024-11-27 10:03:39.383682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.186 qpair failed and we were unable to recover it. 00:31:24.186 [2024-11-27 10:03:39.383909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.186 [2024-11-27 10:03:39.383937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.186 qpair failed and we were unable to recover it. 00:31:24.186 [2024-11-27 10:03:39.384330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.186 [2024-11-27 10:03:39.384361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.186 qpair failed and we were unable to recover it. 00:31:24.186 [2024-11-27 10:03:39.384751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.186 [2024-11-27 10:03:39.384779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.186 qpair failed and we were unable to recover it. 00:31:24.186 [2024-11-27 10:03:39.385130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.186 [2024-11-27 10:03:39.385168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.186 qpair failed and we were unable to recover it. 
00:31:24.186 [2024-11-27 10:03:39.385511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.186 [2024-11-27 10:03:39.385540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.186 qpair failed and we were unable to recover it. 00:31:24.186 [2024-11-27 10:03:39.385907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.186 [2024-11-27 10:03:39.385935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.186 qpair failed and we were unable to recover it. 00:31:24.186 [2024-11-27 10:03:39.386305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.186 [2024-11-27 10:03:39.386336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.186 qpair failed and we were unable to recover it. 00:31:24.186 [2024-11-27 10:03:39.386702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.187 [2024-11-27 10:03:39.386730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.187 qpair failed and we were unable to recover it. 00:31:24.187 [2024-11-27 10:03:39.387086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.187 [2024-11-27 10:03:39.387114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.187 qpair failed and we were unable to recover it. 00:31:24.187 [2024-11-27 10:03:39.387470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.187 [2024-11-27 10:03:39.387500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.187 qpair failed and we were unable to recover it. 00:31:24.187 [2024-11-27 10:03:39.387717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.187 [2024-11-27 10:03:39.387744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.187 qpair failed and we were unable to recover it. 00:31:24.187 [2024-11-27 10:03:39.388088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.187 [2024-11-27 10:03:39.388116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.187 qpair failed and we were unable to recover it. 00:31:24.187 [2024-11-27 10:03:39.388500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.187 [2024-11-27 10:03:39.388530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.187 qpair failed and we were unable to recover it. 00:31:24.187 [2024-11-27 10:03:39.388903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.187 [2024-11-27 10:03:39.388932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.187 qpair failed and we were unable to recover it. 
00:31:24.187 [2024-11-27 10:03:39.389289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.187 [2024-11-27 10:03:39.389320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.187 qpair failed and we were unable to recover it. 00:31:24.187 [2024-11-27 10:03:39.389658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.187 [2024-11-27 10:03:39.389686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.187 qpair failed and we were unable to recover it. 00:31:24.187 [2024-11-27 10:03:39.390064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.187 [2024-11-27 10:03:39.390093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.187 qpair failed and we were unable to recover it. 00:31:24.187 [2024-11-27 10:03:39.390467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.187 [2024-11-27 10:03:39.390505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.187 qpair failed and we were unable to recover it. 00:31:24.187 [2024-11-27 10:03:39.390870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.187 [2024-11-27 10:03:39.390899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.187 qpair failed and we were unable to recover it. 00:31:24.187 [2024-11-27 10:03:39.391123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.187 [2024-11-27 10:03:39.391154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.187 qpair failed and we were unable to recover it. 00:31:24.187 [2024-11-27 10:03:39.391557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.187 [2024-11-27 10:03:39.391587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.187 qpair failed and we were unable to recover it. 00:31:24.187 [2024-11-27 10:03:39.391923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.187 [2024-11-27 10:03:39.391952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.187 qpair failed and we were unable to recover it. 00:31:24.187 [2024-11-27 10:03:39.392314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.187 [2024-11-27 10:03:39.392344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.187 qpair failed and we were unable to recover it. 00:31:24.187 [2024-11-27 10:03:39.392722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.187 [2024-11-27 10:03:39.392750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.187 qpair failed and we were unable to recover it. 
00:31:24.187 [2024-11-27 10:03:39.393105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.187 [2024-11-27 10:03:39.393134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.187 qpair failed and we were unable to recover it. 00:31:24.187 [2024-11-27 10:03:39.393355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.187 [2024-11-27 10:03:39.393384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.187 qpair failed and we were unable to recover it. 00:31:24.187 [2024-11-27 10:03:39.393749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.187 [2024-11-27 10:03:39.393779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.187 qpair failed and we were unable to recover it. 00:31:24.187 [2024-11-27 10:03:39.394141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.187 [2024-11-27 10:03:39.394192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.187 qpair failed and we were unable to recover it. 00:31:24.187 [2024-11-27 10:03:39.394432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.187 [2024-11-27 10:03:39.394460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.187 qpair failed and we were unable to recover it. 00:31:24.187 [2024-11-27 10:03:39.394719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.187 [2024-11-27 10:03:39.394747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.187 qpair failed and we were unable to recover it. 00:31:24.187 [2024-11-27 10:03:39.395105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.188 [2024-11-27 10:03:39.395133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.188 qpair failed and we were unable to recover it. 00:31:24.188 [2024-11-27 10:03:39.395514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.188 [2024-11-27 10:03:39.395544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.188 qpair failed and we were unable to recover it. 00:31:24.188 [2024-11-27 10:03:39.395895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.188 [2024-11-27 10:03:39.395930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.188 qpair failed and we were unable to recover it. 00:31:24.188 [2024-11-27 10:03:39.396185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.188 [2024-11-27 10:03:39.396216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.188 qpair failed and we were unable to recover it. 
00:31:24.188 [2024-11-27 10:03:39.396588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.188 [2024-11-27 10:03:39.396617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.188 qpair failed and we were unable to recover it. 00:31:24.188 [2024-11-27 10:03:39.396967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.188 [2024-11-27 10:03:39.396995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.188 qpair failed and we were unable to recover it. 00:31:24.188 [2024-11-27 10:03:39.397346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.188 [2024-11-27 10:03:39.397375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.188 qpair failed and we were unable to recover it. 00:31:24.188 [2024-11-27 10:03:39.397610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.188 [2024-11-27 10:03:39.397640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.188 qpair failed and we were unable to recover it. 00:31:24.188 [2024-11-27 10:03:39.397991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.188 [2024-11-27 10:03:39.398020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.188 qpair failed and we were unable to recover it. 00:31:24.188 [2024-11-27 10:03:39.398380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.188 [2024-11-27 10:03:39.398411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.188 qpair failed and we were unable to recover it. 00:31:24.188 [2024-11-27 10:03:39.398777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.188 [2024-11-27 10:03:39.398806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.188 qpair failed and we were unable to recover it. 00:31:24.188 [2024-11-27 10:03:39.399183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.188 [2024-11-27 10:03:39.399213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.188 qpair failed and we were unable to recover it. 00:31:24.188 [2024-11-27 10:03:39.399438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.188 [2024-11-27 10:03:39.399467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.188 qpair failed and we were unable to recover it. 00:31:24.188 [2024-11-27 10:03:39.399803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.188 [2024-11-27 10:03:39.399832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.188 qpair failed and we were unable to recover it. 
00:31:24.188 [2024-11-27 10:03:39.400216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.188 [2024-11-27 10:03:39.400246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.188 qpair failed and we were unable to recover it. 00:31:24.188 [2024-11-27 10:03:39.400571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.188 [2024-11-27 10:03:39.400600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.188 qpair failed and we were unable to recover it. 00:31:24.188 [2024-11-27 10:03:39.400822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.188 [2024-11-27 10:03:39.400850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.188 qpair failed and we were unable to recover it. 00:31:24.188 [2024-11-27 10:03:39.401239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.188 [2024-11-27 10:03:39.401269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.188 qpair failed and we were unable to recover it. 00:31:24.188 [2024-11-27 10:03:39.401636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.188 [2024-11-27 10:03:39.401664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.188 qpair failed and we were unable to recover it. 00:31:24.188 [2024-11-27 10:03:39.402027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.188 [2024-11-27 10:03:39.402056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.188 qpair failed and we were unable to recover it. 00:31:24.188 [2024-11-27 10:03:39.402437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.188 [2024-11-27 10:03:39.402467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.188 qpair failed and we were unable to recover it. 00:31:24.188 [2024-11-27 10:03:39.402828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.188 [2024-11-27 10:03:39.402856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.188 qpair failed and we were unable to recover it. 00:31:24.188 [2024-11-27 10:03:39.403258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.188 [2024-11-27 10:03:39.403288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.188 qpair failed and we were unable to recover it. 00:31:24.188 [2024-11-27 10:03:39.403646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.188 [2024-11-27 10:03:39.403676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.188 qpair failed and we were unable to recover it. 
00:31:24.188 [2024-11-27 10:03:39.403907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.188 [2024-11-27 10:03:39.403937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.188 qpair failed and we were unable to recover it. 00:31:24.188 [2024-11-27 10:03:39.404202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.188 [2024-11-27 10:03:39.404230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.188 qpair failed and we were unable to recover it. 00:31:24.188 [2024-11-27 10:03:39.404462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.188 [2024-11-27 10:03:39.404492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.188 qpair failed and we were unable to recover it. 00:31:24.188 [2024-11-27 10:03:39.404868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.188 [2024-11-27 10:03:39.404897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.188 qpair failed and we were unable to recover it. 00:31:24.188 [2024-11-27 10:03:39.405117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.188 [2024-11-27 10:03:39.405146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.188 qpair failed and we were unable to recover it. 00:31:24.188 [2024-11-27 10:03:39.405504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.189 [2024-11-27 10:03:39.405535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.189 qpair failed and we were unable to recover it. 00:31:24.189 [2024-11-27 10:03:39.405661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.189 [2024-11-27 10:03:39.405690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.189 qpair failed and we were unable to recover it. 00:31:24.189 [2024-11-27 10:03:39.406035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.189 [2024-11-27 10:03:39.406065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.189 qpair failed and we were unable to recover it. 00:31:24.189 [2024-11-27 10:03:39.406412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.189 [2024-11-27 10:03:39.406443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.189 qpair failed and we were unable to recover it. 00:31:24.189 [2024-11-27 10:03:39.406664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.189 [2024-11-27 10:03:39.406692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.189 qpair failed and we were unable to recover it. 
00:31:24.189 [2024-11-27 10:03:39.407068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.189 [2024-11-27 10:03:39.407097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.189 qpair failed and we were unable to recover it. 00:31:24.189 [2024-11-27 10:03:39.407480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.189 [2024-11-27 10:03:39.407510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.189 qpair failed and we were unable to recover it. 00:31:24.189 [2024-11-27 10:03:39.407863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.189 [2024-11-27 10:03:39.407892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.189 qpair failed and we were unable to recover it. 00:31:24.189 [2024-11-27 10:03:39.408257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.189 [2024-11-27 10:03:39.408287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.189 qpair failed and we were unable to recover it. 00:31:24.189 [2024-11-27 10:03:39.408673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.189 [2024-11-27 10:03:39.408702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.189 qpair failed and we were unable to recover it. 00:31:24.189 [2024-11-27 10:03:39.408920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.189 [2024-11-27 10:03:39.408949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.189 qpair failed and we were unable to recover it. 00:31:24.189 [2024-11-27 10:03:39.409342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.189 [2024-11-27 10:03:39.409372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.189 qpair failed and we were unable to recover it. 00:31:24.189 [2024-11-27 10:03:39.409586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.189 [2024-11-27 10:03:39.409615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.189 qpair failed and we were unable to recover it. 00:31:24.189 [2024-11-27 10:03:39.409926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.189 [2024-11-27 10:03:39.409961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.189 qpair failed and we were unable to recover it. 00:31:24.189 [2024-11-27 10:03:39.410308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.189 [2024-11-27 10:03:39.410339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.189 qpair failed and we were unable to recover it. 
00:31:24.189 [2024-11-27 10:03:39.410711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.189 [2024-11-27 10:03:39.410740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.189 qpair failed and we were unable to recover it. 00:31:24.189 [2024-11-27 10:03:39.410964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.189 [2024-11-27 10:03:39.410993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.189 qpair failed and we were unable to recover it. 00:31:24.189 [2024-11-27 10:03:39.411352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.189 [2024-11-27 10:03:39.411382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.189 qpair failed and we were unable to recover it. 00:31:24.189 [2024-11-27 10:03:39.411777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.189 [2024-11-27 10:03:39.411807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.189 qpair failed and we were unable to recover it. 00:31:24.189 [2024-11-27 10:03:39.412038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.189 [2024-11-27 10:03:39.412066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.189 qpair failed and we were unable to recover it. 00:31:24.189 [2024-11-27 10:03:39.412440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.189 [2024-11-27 10:03:39.412470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.189 qpair failed and we were unable to recover it. 00:31:24.189 [2024-11-27 10:03:39.412823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.189 [2024-11-27 10:03:39.412852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.189 qpair failed and we were unable to recover it. 00:31:24.189 [2024-11-27 10:03:39.413068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.189 [2024-11-27 10:03:39.413096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.189 qpair failed and we were unable to recover it. 00:31:24.189 [2024-11-27 10:03:39.413306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.189 [2024-11-27 10:03:39.413335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.189 qpair failed and we were unable to recover it. 00:31:24.189 [2024-11-27 10:03:39.413700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.189 [2024-11-27 10:03:39.413729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.189 qpair failed and we were unable to recover it. 
00:31:24.189 [2024-11-27 10:03:39.414103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.189 [2024-11-27 10:03:39.414131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.189 qpair failed and we were unable to recover it. 00:31:24.189 [2024-11-27 10:03:39.414548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.189 [2024-11-27 10:03:39.414577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.189 qpair failed and we were unable to recover it. 00:31:24.189 [2024-11-27 10:03:39.414927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.189 [2024-11-27 10:03:39.414958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.189 qpair failed and we were unable to recover it. 00:31:24.189 [2024-11-27 10:03:39.415330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.189 [2024-11-27 10:03:39.415359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.189 qpair failed and we were unable to recover it. 00:31:24.189 [2024-11-27 10:03:39.415729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.190 [2024-11-27 10:03:39.415758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.190 qpair failed and we were unable to recover it. 00:31:24.190 [2024-11-27 10:03:39.416090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.190 [2024-11-27 10:03:39.416118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.190 qpair failed and we were unable to recover it. 00:31:24.190 [2024-11-27 10:03:39.416388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.190 [2024-11-27 10:03:39.416417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.190 qpair failed and we were unable to recover it. 00:31:24.190 [2024-11-27 10:03:39.416766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.190 [2024-11-27 10:03:39.416795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.190 qpair failed and we were unable to recover it. 00:31:24.190 [2024-11-27 10:03:39.417016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.190 [2024-11-27 10:03:39.417045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.190 qpair failed and we were unable to recover it. 00:31:24.190 [2024-11-27 10:03:39.417396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.190 [2024-11-27 10:03:39.417426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.190 qpair failed and we were unable to recover it. 
00:31:24.190 [2024-11-27 10:03:39.417682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.190 [2024-11-27 10:03:39.417710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.190 qpair failed and we were unable to recover it. 00:31:24.190 [2024-11-27 10:03:39.418051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.190 [2024-11-27 10:03:39.418078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.190 qpair failed and we were unable to recover it. 00:31:24.190 [2024-11-27 10:03:39.418431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.190 [2024-11-27 10:03:39.418463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.190 qpair failed and we were unable to recover it. 00:31:24.190 [2024-11-27 10:03:39.418671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.190 [2024-11-27 10:03:39.418700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.190 qpair failed and we were unable to recover it. 00:31:24.190 [2024-11-27 10:03:39.418927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.190 [2024-11-27 10:03:39.418959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.190 qpair failed and we were unable to recover it. 00:31:24.190 [2024-11-27 10:03:39.419205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.190 [2024-11-27 10:03:39.419242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.190 qpair failed and we were unable to recover it. 00:31:24.190 [2024-11-27 10:03:39.419484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.190 [2024-11-27 10:03:39.419516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.190 qpair failed and we were unable to recover it. 00:31:24.190 [2024-11-27 10:03:39.419875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.190 [2024-11-27 10:03:39.419905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.190 qpair failed and we were unable to recover it. 00:31:24.190 [2024-11-27 10:03:39.420128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.190 [2024-11-27 10:03:39.420157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.190 qpair failed and we were unable to recover it. 00:31:24.190 [2024-11-27 10:03:39.420550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.190 [2024-11-27 10:03:39.420580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.190 qpair failed and we were unable to recover it. 
00:31:24.190 [2024-11-27 10:03:39.420932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.190 [2024-11-27 10:03:39.420961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.190 qpair failed and we were unable to recover it. 00:31:24.190 [2024-11-27 10:03:39.421326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.190 [2024-11-27 10:03:39.421355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.190 qpair failed and we were unable to recover it. 00:31:24.190 [2024-11-27 10:03:39.421722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.190 [2024-11-27 10:03:39.421751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.190 qpair failed and we were unable to recover it. 00:31:24.190 [2024-11-27 10:03:39.422118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.190 [2024-11-27 10:03:39.422147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.190 qpair failed and we were unable to recover it. 00:31:24.190 [2024-11-27 10:03:39.422528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.190 [2024-11-27 10:03:39.422558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.190 qpair failed and we were unable to recover it. 00:31:24.190 [2024-11-27 10:03:39.422922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.190 [2024-11-27 10:03:39.422952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.190 qpair failed and we were unable to recover it. 00:31:24.190 [2024-11-27 10:03:39.423324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.190 [2024-11-27 10:03:39.423354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.190 qpair failed and we were unable to recover it. 00:31:24.190 [2024-11-27 10:03:39.423708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.190 [2024-11-27 10:03:39.423736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.190 qpair failed and we were unable to recover it. 00:31:24.190 [2024-11-27 10:03:39.424110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.190 [2024-11-27 10:03:39.424140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.190 qpair failed and we were unable to recover it. 00:31:24.190 [2024-11-27 10:03:39.424366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.190 [2024-11-27 10:03:39.424395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.190 qpair failed and we were unable to recover it. 
00:31:24.190 [2024-11-27 10:03:39.424758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.190 [2024-11-27 10:03:39.424787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.190 qpair failed and we were unable to recover it. 00:31:24.190 [2024-11-27 10:03:39.425151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.191 [2024-11-27 10:03:39.425189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.191 qpair failed and we were unable to recover it. 00:31:24.191 [2024-11-27 10:03:39.425554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.191 [2024-11-27 10:03:39.425583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.191 qpair failed and we were unable to recover it. 00:31:24.191 [2024-11-27 10:03:39.425944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.191 [2024-11-27 10:03:39.425973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.191 qpair failed and we were unable to recover it. 00:31:24.191 [2024-11-27 10:03:39.426339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.191 [2024-11-27 10:03:39.426369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.191 qpair failed and we were unable to recover it. 00:31:24.191 [2024-11-27 10:03:39.426736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.191 [2024-11-27 10:03:39.426764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.191 qpair failed and we were unable to recover it. 00:31:24.191 [2024-11-27 10:03:39.426983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.191 [2024-11-27 10:03:39.427012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.191 qpair failed and we were unable to recover it. 00:31:24.191 [2024-11-27 10:03:39.427420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.191 [2024-11-27 10:03:39.427450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.191 qpair failed and we were unable to recover it. 00:31:24.191 [2024-11-27 10:03:39.427820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.191 [2024-11-27 10:03:39.427849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.191 qpair failed and we were unable to recover it. 00:31:24.191 [2024-11-27 10:03:39.428215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.191 [2024-11-27 10:03:39.428246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:31:24.191 qpair failed and we were unable to recover it. 
00:31:24.191 [2024-11-27 10:03:39.428600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:31:24.191 [2024-11-27 10:03:39.428629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 
00:31:24.191 qpair failed and we were unable to recover it. 
[... the connect()/qpair-failure triplet above repeats for tqpair=0x7fa228000b90 from 10:03:39.428600 through 10:03:39.448807, every attempt against addr=10.0.0.2, port=4420 ...]
00:31:24.193 [2024-11-27 10:03:39.449251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:31:24.193 [2024-11-27 10:03:39.449380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115b0c0 with addr=10.0.0.2, port=4420 
00:31:24.193 qpair failed and we were unable to recover it. 
[... the same triplet repeats for tqpair=0x115b0c0 from 10:03:39.449251 through 10:03:39.489285 ...]
00:31:24.197 [2024-11-27 10:03:39.489341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1150e00 (9): Bad file descriptor 
00:31:24.197 [2024-11-27 10:03:39.490005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:31:24.197 [2024-11-27 10:03:39.490109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 
00:31:24.197 qpair failed and we were unable to recover it. 
[... the same triplet repeats for tqpair=0x7fa220000b90 from 10:03:39.490005 through 10:03:39.505235 ...]
00:31:24.198 [2024-11-27 10:03:39.505462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.198 [2024-11-27 10:03:39.505492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.198 qpair failed and we were unable to recover it. 00:31:24.198 [2024-11-27 10:03:39.505867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.198 [2024-11-27 10:03:39.505897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.198 qpair failed and we were unable to recover it. 00:31:24.198 [2024-11-27 10:03:39.506267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.198 [2024-11-27 10:03:39.506298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.198 qpair failed and we were unable to recover it. 00:31:24.198 [2024-11-27 10:03:39.506548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.198 [2024-11-27 10:03:39.506577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.198 qpair failed and we were unable to recover it. 00:31:24.198 [2024-11-27 10:03:39.506940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.198 [2024-11-27 10:03:39.506968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.198 qpair failed and we were unable to recover it. 00:31:24.198 [2024-11-27 10:03:39.507229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.198 [2024-11-27 10:03:39.507259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.198 qpair failed and we were unable to recover it. 00:31:24.198 [2024-11-27 10:03:39.507615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.198 [2024-11-27 10:03:39.507644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.198 qpair failed and we were unable to recover it. 00:31:24.198 [2024-11-27 10:03:39.508020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.198 [2024-11-27 10:03:39.508049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.198 qpair failed and we were unable to recover it. 00:31:24.198 [2024-11-27 10:03:39.508393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.198 [2024-11-27 10:03:39.508425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.198 qpair failed and we were unable to recover it. 00:31:24.198 [2024-11-27 10:03:39.508666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.198 [2024-11-27 10:03:39.508696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.198 qpair failed and we were unable to recover it. 
00:31:24.198 [2024-11-27 10:03:39.508988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.198 [2024-11-27 10:03:39.509018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.198 qpair failed and we were unable to recover it. 00:31:24.198 [2024-11-27 10:03:39.509406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.198 [2024-11-27 10:03:39.509439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.198 qpair failed and we were unable to recover it. 00:31:24.198 [2024-11-27 10:03:39.509719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.198 [2024-11-27 10:03:39.509748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.198 qpair failed and we were unable to recover it. 00:31:24.198 [2024-11-27 10:03:39.510116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.199 [2024-11-27 10:03:39.510146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.199 qpair failed and we were unable to recover it. 00:31:24.199 [2024-11-27 10:03:39.510498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.199 [2024-11-27 10:03:39.510529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.199 qpair failed and we were unable to recover it. 00:31:24.199 [2024-11-27 10:03:39.510893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.199 [2024-11-27 10:03:39.510922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.199 qpair failed and we were unable to recover it. 00:31:24.199 [2024-11-27 10:03:39.511283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.199 [2024-11-27 10:03:39.511313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.199 qpair failed and we were unable to recover it. 00:31:24.199 [2024-11-27 10:03:39.511673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.199 [2024-11-27 10:03:39.511701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.199 qpair failed and we were unable to recover it. 00:31:24.199 [2024-11-27 10:03:39.512063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.199 [2024-11-27 10:03:39.512092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.199 qpair failed and we were unable to recover it. 00:31:24.199 [2024-11-27 10:03:39.512468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.199 [2024-11-27 10:03:39.512497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.199 qpair failed and we were unable to recover it. 
00:31:24.199 [2024-11-27 10:03:39.512859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.199 [2024-11-27 10:03:39.512887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.199 qpair failed and we were unable to recover it. 00:31:24.199 [2024-11-27 10:03:39.513077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.199 [2024-11-27 10:03:39.513106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.199 qpair failed and we were unable to recover it. 00:31:24.199 [2024-11-27 10:03:39.513354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.199 [2024-11-27 10:03:39.513384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.199 qpair failed and we were unable to recover it. 00:31:24.199 [2024-11-27 10:03:39.513757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.199 [2024-11-27 10:03:39.513785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.199 qpair failed and we were unable to recover it. 00:31:24.199 [2024-11-27 10:03:39.514181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.199 [2024-11-27 10:03:39.514212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.199 qpair failed and we were unable to recover it. 00:31:24.199 [2024-11-27 10:03:39.514458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.199 [2024-11-27 10:03:39.514486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.199 qpair failed and we were unable to recover it. 00:31:24.199 [2024-11-27 10:03:39.514854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.199 [2024-11-27 10:03:39.514882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.199 qpair failed and we were unable to recover it. 00:31:24.199 [2024-11-27 10:03:39.515258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.199 [2024-11-27 10:03:39.515287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.199 qpair failed and we were unable to recover it. 00:31:24.199 [2024-11-27 10:03:39.515662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.199 [2024-11-27 10:03:39.515691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.199 qpair failed and we were unable to recover it. 00:31:24.199 [2024-11-27 10:03:39.516052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.199 [2024-11-27 10:03:39.516079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.199 qpair failed and we were unable to recover it. 
00:31:24.199 [2024-11-27 10:03:39.516460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.199 [2024-11-27 10:03:39.516489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.199 qpair failed and we were unable to recover it. 00:31:24.199 [2024-11-27 10:03:39.516843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.199 [2024-11-27 10:03:39.516872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.199 qpair failed and we were unable to recover it. 00:31:24.199 [2024-11-27 10:03:39.517249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.199 [2024-11-27 10:03:39.517277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.199 qpair failed and we were unable to recover it. 00:31:24.199 [2024-11-27 10:03:39.517628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.199 [2024-11-27 10:03:39.517656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.199 qpair failed and we were unable to recover it. 00:31:24.199 [2024-11-27 10:03:39.518020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.199 [2024-11-27 10:03:39.518064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.199 qpair failed and we were unable to recover it. 00:31:24.199 [2024-11-27 10:03:39.518307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.199 [2024-11-27 10:03:39.518337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.199 qpair failed and we were unable to recover it. 00:31:24.199 [2024-11-27 10:03:39.518725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.199 [2024-11-27 10:03:39.518753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.199 qpair failed and we were unable to recover it. 00:31:24.200 [2024-11-27 10:03:39.519115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.200 [2024-11-27 10:03:39.519143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.200 qpair failed and we were unable to recover it. 00:31:24.200 [2024-11-27 10:03:39.519522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.200 [2024-11-27 10:03:39.519550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.200 qpair failed and we were unable to recover it. 00:31:24.200 [2024-11-27 10:03:39.519916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.200 [2024-11-27 10:03:39.519945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.200 qpair failed and we were unable to recover it. 
00:31:24.200 [2024-11-27 10:03:39.520314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.200 [2024-11-27 10:03:39.520345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.200 qpair failed and we were unable to recover it. 00:31:24.200 [2024-11-27 10:03:39.520717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.200 [2024-11-27 10:03:39.520745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.200 qpair failed and we were unable to recover it. 00:31:24.200 [2024-11-27 10:03:39.520972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.200 [2024-11-27 10:03:39.521001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.200 qpair failed and we were unable to recover it. 00:31:24.200 [2024-11-27 10:03:39.521126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.200 [2024-11-27 10:03:39.521154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.200 qpair failed and we were unable to recover it. 00:31:24.200 [2024-11-27 10:03:39.521387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.200 [2024-11-27 10:03:39.521416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.200 qpair failed and we were unable to recover it. 00:31:24.200 [2024-11-27 10:03:39.521789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.200 [2024-11-27 10:03:39.521819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.200 qpair failed and we were unable to recover it. 00:31:24.200 [2024-11-27 10:03:39.522184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.200 [2024-11-27 10:03:39.522216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.200 qpair failed and we were unable to recover it. 00:31:24.200 [2024-11-27 10:03:39.522476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.200 [2024-11-27 10:03:39.522505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.200 qpair failed and we were unable to recover it. 00:31:24.200 [2024-11-27 10:03:39.522720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.200 [2024-11-27 10:03:39.522749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.200 qpair failed and we were unable to recover it. 00:31:24.200 [2024-11-27 10:03:39.523116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.200 [2024-11-27 10:03:39.523145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.200 qpair failed and we were unable to recover it. 
00:31:24.200 [2024-11-27 10:03:39.523528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.200 [2024-11-27 10:03:39.523557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.200 qpair failed and we were unable to recover it. 00:31:24.200 [2024-11-27 10:03:39.523911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.200 [2024-11-27 10:03:39.523940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.200 qpair failed and we were unable to recover it. 00:31:24.200 [2024-11-27 10:03:39.524288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.200 [2024-11-27 10:03:39.524318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.200 qpair failed and we were unable to recover it. 00:31:24.200 [2024-11-27 10:03:39.524662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.200 [2024-11-27 10:03:39.524691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.200 qpair failed and we were unable to recover it. 00:31:24.200 [2024-11-27 10:03:39.525080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.200 [2024-11-27 10:03:39.525108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.200 qpair failed and we were unable to recover it. 00:31:24.200 [2024-11-27 10:03:39.525488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.200 [2024-11-27 10:03:39.525518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.200 qpair failed and we were unable to recover it. 00:31:24.200 [2024-11-27 10:03:39.525811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.200 [2024-11-27 10:03:39.525839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.200 qpair failed and we were unable to recover it. 00:31:24.200 [2024-11-27 10:03:39.526082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.200 [2024-11-27 10:03:39.526110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.200 qpair failed and we were unable to recover it. 00:31:24.200 [2024-11-27 10:03:39.526490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.200 [2024-11-27 10:03:39.526520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.200 qpair failed and we were unable to recover it. 00:31:24.200 [2024-11-27 10:03:39.526881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.200 [2024-11-27 10:03:39.526911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.200 qpair failed and we were unable to recover it. 
00:31:24.200 [2024-11-27 10:03:39.527273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.200 [2024-11-27 10:03:39.527302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.200 qpair failed and we were unable to recover it. 00:31:24.200 [2024-11-27 10:03:39.527678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.200 [2024-11-27 10:03:39.527707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.200 qpair failed and we were unable to recover it. 00:31:24.200 [2024-11-27 10:03:39.528075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.200 [2024-11-27 10:03:39.528106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.200 qpair failed and we were unable to recover it. 00:31:24.200 [2024-11-27 10:03:39.528484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.200 [2024-11-27 10:03:39.528513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.200 qpair failed and we were unable to recover it. 00:31:24.200 [2024-11-27 10:03:39.528745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.200 [2024-11-27 10:03:39.528773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.200 qpair failed and we were unable to recover it. 00:31:24.200 [2024-11-27 10:03:39.529142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.200 [2024-11-27 10:03:39.529179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.200 qpair failed and we were unable to recover it. 00:31:24.200 [2024-11-27 10:03:39.529524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.201 [2024-11-27 10:03:39.529553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.201 qpair failed and we were unable to recover it. 00:31:24.201 [2024-11-27 10:03:39.529819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.201 [2024-11-27 10:03:39.529847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.201 qpair failed and we were unable to recover it. 00:31:24.201 [2024-11-27 10:03:39.530195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.201 [2024-11-27 10:03:39.530225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.201 qpair failed and we were unable to recover it. 00:31:24.201 [2024-11-27 10:03:39.530574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.201 [2024-11-27 10:03:39.530603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.201 qpair failed and we were unable to recover it. 
00:31:24.201 [2024-11-27 10:03:39.530868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.201 [2024-11-27 10:03:39.530896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.201 qpair failed and we were unable to recover it. 00:31:24.201 [2024-11-27 10:03:39.531295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.201 [2024-11-27 10:03:39.531325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.201 qpair failed and we were unable to recover it. 00:31:24.201 [2024-11-27 10:03:39.531558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.201 [2024-11-27 10:03:39.531585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.201 qpair failed and we were unable to recover it. 00:31:24.201 [2024-11-27 10:03:39.531939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.201 [2024-11-27 10:03:39.531967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.201 qpair failed and we were unable to recover it. 00:31:24.201 [2024-11-27 10:03:39.532322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.201 [2024-11-27 10:03:39.532360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.201 qpair failed and we were unable to recover it. 00:31:24.201 [2024-11-27 10:03:39.532583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.201 [2024-11-27 10:03:39.532612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.201 qpair failed and we were unable to recover it. 00:31:24.201 [2024-11-27 10:03:39.533005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.201 [2024-11-27 10:03:39.533033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.201 qpair failed and we were unable to recover it. 00:31:24.201 [2024-11-27 10:03:39.533391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.201 [2024-11-27 10:03:39.533420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.201 qpair failed and we were unable to recover it. 00:31:24.201 [2024-11-27 10:03:39.533792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.201 [2024-11-27 10:03:39.533821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.201 qpair failed and we were unable to recover it. 00:31:24.201 [2024-11-27 10:03:39.534200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.201 [2024-11-27 10:03:39.534229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.201 qpair failed and we were unable to recover it. 
00:31:24.201 [2024-11-27 10:03:39.534561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.201 [2024-11-27 10:03:39.534590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.201 qpair failed and we were unable to recover it. 00:31:24.201 [2024-11-27 10:03:39.534821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.201 [2024-11-27 10:03:39.534850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.201 qpair failed and we were unable to recover it. 00:31:24.201 [2024-11-27 10:03:39.535094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.201 [2024-11-27 10:03:39.535122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.201 qpair failed and we were unable to recover it. 00:31:24.201 [2024-11-27 10:03:39.535542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.201 [2024-11-27 10:03:39.535572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.201 qpair failed and we were unable to recover it. 00:31:24.201 [2024-11-27 10:03:39.535933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.201 [2024-11-27 10:03:39.535961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.201 qpair failed and we were unable to recover it. 00:31:24.201 [2024-11-27 10:03:39.536197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.201 [2024-11-27 10:03:39.536227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.201 qpair failed and we were unable to recover it. 00:31:24.201 [2024-11-27 10:03:39.536436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.201 [2024-11-27 10:03:39.536465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.201 qpair failed and we were unable to recover it. 00:31:24.201 [2024-11-27 10:03:39.536830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.201 [2024-11-27 10:03:39.536858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.201 qpair failed and we were unable to recover it. 00:31:24.201 [2024-11-27 10:03:39.537223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.201 [2024-11-27 10:03:39.537253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.201 qpair failed and we were unable to recover it. 00:31:24.201 [2024-11-27 10:03:39.537575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.201 [2024-11-27 10:03:39.537603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.201 qpair failed and we were unable to recover it. 
00:31:24.201 [2024-11-27 10:03:39.537832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.201 [2024-11-27 10:03:39.537860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.201 qpair failed and we were unable to recover it. 00:31:24.201 [2024-11-27 10:03:39.538319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.201 [2024-11-27 10:03:39.538349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.201 qpair failed and we were unable to recover it. 00:31:24.201 [2024-11-27 10:03:39.538727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.201 [2024-11-27 10:03:39.538756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.201 qpair failed and we were unable to recover it. 00:31:24.201 [2024-11-27 10:03:39.539122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.201 [2024-11-27 10:03:39.539152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.201 qpair failed and we were unable to recover it. 00:31:24.201 [2024-11-27 10:03:39.539524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.201 [2024-11-27 10:03:39.539553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.201 qpair failed and we were unable to recover it. 00:31:24.201 [2024-11-27 10:03:39.539924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.201 [2024-11-27 10:03:39.539953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.201 qpair failed and we were unable to recover it. 00:31:24.202 [2024-11-27 10:03:39.540332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.202 [2024-11-27 10:03:39.540363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.202 qpair failed and we were unable to recover it. 00:31:24.202 [2024-11-27 10:03:39.540527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.202 [2024-11-27 10:03:39.540555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.202 qpair failed and we were unable to recover it. 00:31:24.202 [2024-11-27 10:03:39.540933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.202 [2024-11-27 10:03:39.540961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.202 qpair failed and we were unable to recover it. 00:31:24.202 [2024-11-27 10:03:39.541301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.202 [2024-11-27 10:03:39.541332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.202 qpair failed and we were unable to recover it. 
00:31:24.202 [2024-11-27 10:03:39.541704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.202 [2024-11-27 10:03:39.541733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.202 qpair failed and we were unable to recover it. 00:31:24.202 [2024-11-27 10:03:39.542097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.202 [2024-11-27 10:03:39.542126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.202 qpair failed and we were unable to recover it. 00:31:24.202 [2024-11-27 10:03:39.542532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.202 [2024-11-27 10:03:39.542562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.202 qpair failed and we were unable to recover it. 00:31:24.202 [2024-11-27 10:03:39.542927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.202 [2024-11-27 10:03:39.542957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.202 qpair failed and we were unable to recover it. 00:31:24.202 [2024-11-27 10:03:39.543102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.202 [2024-11-27 10:03:39.543130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.202 qpair failed and we were unable to recover it. 00:31:24.202 [2024-11-27 10:03:39.543470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.202 [2024-11-27 10:03:39.543499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.202 qpair failed and we were unable to recover it. 00:31:24.202 [2024-11-27 10:03:39.543760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.202 [2024-11-27 10:03:39.543789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.202 qpair failed and we were unable to recover it. 00:31:24.202 [2024-11-27 10:03:39.544088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.202 [2024-11-27 10:03:39.544116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.202 qpair failed and we were unable to recover it. 00:31:24.202 [2024-11-27 10:03:39.544497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.202 [2024-11-27 10:03:39.544527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.202 qpair failed and we were unable to recover it. 00:31:24.202 [2024-11-27 10:03:39.544766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.202 [2024-11-27 10:03:39.544794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.202 qpair failed and we were unable to recover it. 
00:31:24.202 [2024-11-27 10:03:39.545196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.202 [2024-11-27 10:03:39.545226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.202 qpair failed and we were unable to recover it. 00:31:24.202 [2024-11-27 10:03:39.545444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.202 [2024-11-27 10:03:39.545472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.202 qpair failed and we were unable to recover it. 00:31:24.202 [2024-11-27 10:03:39.545830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.202 [2024-11-27 10:03:39.545858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.202 qpair failed and we were unable to recover it. 00:31:24.202 [2024-11-27 10:03:39.546223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.202 [2024-11-27 10:03:39.546253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.202 qpair failed and we were unable to recover it. 00:31:24.202 [2024-11-27 10:03:39.546499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.202 [2024-11-27 10:03:39.546534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.202 qpair failed and we were unable to recover it. 00:31:24.202 [2024-11-27 10:03:39.546821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.202 [2024-11-27 10:03:39.546849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.202 qpair failed and we were unable to recover it. 00:31:24.202 [2024-11-27 10:03:39.547083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.202 [2024-11-27 10:03:39.547113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.202 qpair failed and we were unable to recover it. 00:31:24.202 [2024-11-27 10:03:39.547508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.202 [2024-11-27 10:03:39.547538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.202 qpair failed and we were unable to recover it. 00:31:24.202 [2024-11-27 10:03:39.547888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.202 [2024-11-27 10:03:39.547917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.202 qpair failed and we were unable to recover it. 00:31:24.202 [2024-11-27 10:03:39.548131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.202 [2024-11-27 10:03:39.548170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.202 qpair failed and we were unable to recover it. 
00:31:24.202 [2024-11-27 10:03:39.548404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.202 [2024-11-27 10:03:39.548433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.202 qpair failed and we were unable to recover it. 00:31:24.202 [2024-11-27 10:03:39.548751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.202 [2024-11-27 10:03:39.548780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.202 qpair failed and we were unable to recover it. 00:31:24.202 [2024-11-27 10:03:39.549157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.202 [2024-11-27 10:03:39.549199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.202 qpair failed and we were unable to recover it. 00:31:24.202 [2024-11-27 10:03:39.549584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.202 [2024-11-27 10:03:39.549612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.202 qpair failed and we were unable to recover it. 00:31:24.202 [2024-11-27 10:03:39.549985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.202 [2024-11-27 10:03:39.550014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.202 qpair failed and we were unable to recover it. 00:31:24.202 [2024-11-27 10:03:39.550132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.202 [2024-11-27 10:03:39.550169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.202 qpair failed and we were unable to recover it. 00:31:24.202 [2024-11-27 10:03:39.550520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.203 [2024-11-27 10:03:39.550550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.203 qpair failed and we were unable to recover it. 00:31:24.203 [2024-11-27 10:03:39.550911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.203 [2024-11-27 10:03:39.550940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.203 qpair failed and we were unable to recover it. 00:31:24.203 [2024-11-27 10:03:39.551201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.203 [2024-11-27 10:03:39.551237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.203 qpair failed and we were unable to recover it. 
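Errno 111 on Linux is ECONNREFUSED: the kernel at 10.0.0.2 actively rejected each TCP handshake because nothing was listening on port 4420, which is the expected state while the disconnect test holds the target down. A minimal standalone probe (illustrative only, not part of the test suite; the address and port are taken from the log above) reproduces the same failure:

    # Illustrative probe, not from the SPDK suite: bash's /dev/tcp pseudo-device
    # attempts a TCP connect; with no listener on 10.0.0.2:4420 the kernel
    # refuses the handshake and connect() fails with errno 111 (ECONNREFUSED).
    # timeout(1) only guards the case where packets are dropped instead of refused.
    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "connect() to 10.0.0.2:4420 failed: errno 111 (ECONNREFUSED)"
    fi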
00:31:24.203 10:03:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:24.203 10:03:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:31:24.203 10:03:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:31:24.203 10:03:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:24.203 10:03:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the errno 111 / qpair-failure pattern (timestamps 10:03:39.551 through 10:03:39.554) continues to interleave with these xtrace lines ...]
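The xtrace lines above come from the harness's bounded wait for the (re)started target: a counter is decremented on each poll, (( i == 0 )) is the exhaustion check, and return 0 records that the wait succeeded before timing_exit start_nvmf_tgt closes the timing region. A hypothetical sketch of that pattern follows; the function name, counter value, and the nc probe are illustrative assumptions, not copied from autotest_common.sh:

    # Hypothetical reconstruction of the bounded-wait pattern suggested by the
    # xtrace lines; names and the probe command are illustrative assumptions.
    wait_for_tgt_listener() {
        local i=50                       # polling budget
        while ! nc -z 10.0.0.2 4420 2>/dev/null; do
            i=$(( i - 1 ))
            (( i == 0 )) && return 1     # budget exhausted: give up
            sleep 0.5
        done
        return 0                         # listener is up; wait succeeded
    }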
00:31:24.203 [2024-11-27 10:03:39.554474 .. 10:03:39.592388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. -- this triple repeated 110 times; only the timestamps differ
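The collapsed run above packs 110 reconnect attempts between 10:03:39.554 and 10:03:39.592, i.e. the initiator retries back-to-back within roughly 40 ms with no visible backoff while the listener is gone. When triaging a saved copy of this console output, the spam is easy to tally with grep (the file name here is hypothetical):

    grep -c 'connect() failed, errno = 111' nvmf-tcp-phy-autotest.console.log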
00:31:24.206 [2024-11-27 10:03:39.592758 .. 10:03:39.593977] the same error triple repeated 4 more times (tqpair=0x7fa220000b90, addr=10.0.0.2, port=4420)
00:31:24.206 10:03:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:24.206 10:03:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:31:24.206 10:03:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:24.206 10:03:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:24.207 [2024-11-27 10:03:39.594381 .. 10:03:39.595578] the same error triple repeated 4 more times, interleaved with the trace lines above
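The two traced commands set up the next step of the test: the trap guarantees the process_shm / nvmftestfini cleanup runs on any exit, and rpc_cmd bdev_malloc_create 64 512 -b Malloc0 asks the running target for a 64 MB RAM-backed bdev with a 512-byte block size, exposed as Malloc0. Outside the harness the same RPC is normally issued through SPDK's rpc.py; a sketch, assuming a target is already up on the default RPC socket (tgt_pid is hypothetical):

    # mirror the traced trap: clean up the target process on any exit
    trap 'kill "$tgt_pid" 2>/dev/null' SIGINT SIGTERM EXIT
    # 64 MB malloc bdev, 512-byte blocks, named Malloc0
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0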
00:31:24.207 [2024-11-27 10:03:39.595925 .. 10:03:39.624595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. -- this triple repeated 80 times; only the timestamps differ
00:31:24.209 [2024-11-27 10:03:39.624955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.209 [2024-11-27 10:03:39.624984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.209 qpair failed and we were unable to recover it. 00:31:24.209 [2024-11-27 10:03:39.625380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.209 [2024-11-27 10:03:39.625410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.209 qpair failed and we were unable to recover it. 00:31:24.209 [2024-11-27 10:03:39.625758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.209 [2024-11-27 10:03:39.625788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.209 qpair failed and we were unable to recover it. 00:31:24.209 [2024-11-27 10:03:39.626152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.209 [2024-11-27 10:03:39.626190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.209 qpair failed and we were unable to recover it. 00:31:24.209 [2024-11-27 10:03:39.626552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.209 [2024-11-27 10:03:39.626580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.209 qpair failed and we were unable to recover it. 00:31:24.209 [2024-11-27 10:03:39.626850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.209 [2024-11-27 10:03:39.626878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.209 qpair failed and we were unable to recover it. 00:31:24.209 [2024-11-27 10:03:39.627232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.209 [2024-11-27 10:03:39.627263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.209 qpair failed and we were unable to recover it. 00:31:24.209 [2024-11-27 10:03:39.627662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.209 [2024-11-27 10:03:39.627691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.209 qpair failed and we were unable to recover it. 00:31:24.209 [2024-11-27 10:03:39.627912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.209 [2024-11-27 10:03:39.627940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.209 qpair failed and we were unable to recover it. 00:31:24.209 [2024-11-27 10:03:39.628306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.209 [2024-11-27 10:03:39.628336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.209 qpair failed and we were unable to recover it. 
00:31:24.209 [2024-11-27 10:03:39.628713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.209 [2024-11-27 10:03:39.628742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.209 qpair failed and we were unable to recover it. 00:31:24.209 [2024-11-27 10:03:39.629084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.209 [2024-11-27 10:03:39.629115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.209 qpair failed and we were unable to recover it. 00:31:24.209 [2024-11-27 10:03:39.629482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.209 [2024-11-27 10:03:39.629512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.209 qpair failed and we were unable to recover it. 00:31:24.209 [2024-11-27 10:03:39.629774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.209 [2024-11-27 10:03:39.629802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.209 qpair failed and we were unable to recover it. 00:31:24.209 [2024-11-27 10:03:39.630173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.209 [2024-11-27 10:03:39.630203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.209 qpair failed and we were unable to recover it. 00:31:24.209 [2024-11-27 10:03:39.630570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.209 [2024-11-27 10:03:39.630607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.209 qpair failed and we were unable to recover it. 00:31:24.209 [2024-11-27 10:03:39.630863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.209 [2024-11-27 10:03:39.630892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.209 qpair failed and we were unable to recover it. 00:31:24.209 [2024-11-27 10:03:39.631248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.209 [2024-11-27 10:03:39.631278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.209 qpair failed and we were unable to recover it. 00:31:24.209 [2024-11-27 10:03:39.631540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.209 [2024-11-27 10:03:39.631568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.209 qpair failed and we were unable to recover it. 00:31:24.210 [2024-11-27 10:03:39.631922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.210 [2024-11-27 10:03:39.631951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.210 qpair failed and we were unable to recover it. 
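Note on the failure mode: errno = 111 is ECONNREFUSED on Linux, so each connect() from the host-side NVMe/TCP initiator to 10.0.0.2:4420 is being actively refused because nothing is listening on that port yet; in a target_disconnect test this retry storm is the expected state while the target is still being configured (the listener is only added further down, at host/target_disconnect.sh@25). A minimal shell check for the same condition, assuming a netcat that supports -z and Python 3 on the build host:

    # Probe the NVMe/TCP port; reports "Connection refused" while
    # nothing is bound to 10.0.0.2:4420.
    nc -zv 10.0.0.2 4420

    # Confirm the errno mapping: 111 == ECONNREFUSED.
    python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'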
00:31:24.210 [... connect()/qpair-failure pair continues, 10:03:39.632 through 10:03:39.635 ...]
00:31:24.474 Malloc0
00:31:24.474 [... connect()/qpair-failure pair continues ...]
00:31:24.474 [... connect()/qpair-failure pair continues ...]
00:31:24.474 10:03:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:24.474 10:03:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:31:24.474 10:03:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:24.474 10:03:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:24.474 [... connect()/qpair-failure pair continues through 10:03:39.638 ...]
00:31:24.474 [... connect()/qpair-failure pair continues, 10:03:39.638 through 10:03:39.642 ...]
00:31:24.474 [... connect()/qpair-failure pair continues ...]
00:31:24.474 [2024-11-27 10:03:39.642751] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:24.474 [... connect()/qpair-failure pair continues through 10:03:39.645 ...]
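The "TCP Transport Init" notice is the target acknowledging the nvmf_create_transport RPC issued at host/target_disconnect.sh@21. In the SPDK test harness, rpc_cmd forwards its arguments to scripts/rpc.py, so the equivalent standalone invocation would look roughly like this sketch (the /var/tmp/spdk.sock socket path is the rpc.py default, assumed here):

    # Initialize the TCP transport in the running nvmf target;
    # -t selects the transport type, -o is the TCP-specific
    # c2h-success toggle that this test script passes along.
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o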
00:31:24.475 [... connect()/qpair-failure pair continues, 10:03:39.645 through 10:03:39.649 ...]
00:31:24.475 [... connect()/qpair-failure pair continues, 10:03:39.649 through 10:03:39.651 ...]
00:31:24.475 10:03:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:24.475 [... connect()/qpair-failure pair continues ...]
00:31:24.475 10:03:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:31:24.475 10:03:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:24.475 10:03:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:24.475 [... connect()/qpair-failure pair continues, 10:03:39.652 through 10:03:39.655 ...]
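The subsystem created here is what the initiator will eventually attach to. Written out as a direct rpc.py call (same wrapper assumption as above, a sketch rather than the literal harness invocation):

    # Create NVMe-oF subsystem cnode1.
    # -a : allow any host NQN to connect (no host whitelist)
    # -s : controller serial number reported to connecting hosts
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001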
00:31:24.475 [... connect()/qpair-failure pair continues, 10:03:39.655 through 10:03:39.662 ...]
00:31:24.476 [... connect()/qpair-failure pair continues ...]
00:31:24.476 10:03:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:24.476 10:03:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:31:24.476 10:03:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:24.476 10:03:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:24.476 [... connect()/qpair-failure pair continues through 10:03:39.665 ...]
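The bare "Malloc0" printed earlier is most likely the bdev name echoed back by a bdev_malloc_create RPC; this step attaches that ramdisk bdev to the subsystem as a namespace. Equivalent direct call, under the same rpc.py assumptions:

    # Expose the Malloc0 bdev as a namespace of cnode1; the target
    # assigns the next free NSID (1 for a freshly created subsystem).
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0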
00:31:24.476 [... connect()/qpair-failure pair continues, 10:03:39.665 through 10:03:39.675 ...]
00:31:24.477 10:03:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:24.477 [2024-11-27 10:03:39.676113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.477 [2024-11-27 10:03:39.676143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.477 qpair failed and we were unable to recover it. 00:31:24.477 10:03:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:24.477 [2024-11-27 10:03:39.676489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.477 [2024-11-27 10:03:39.676521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.477 qpair failed and we were unable to recover it. 00:31:24.477 10:03:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:24.477 10:03:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:24.477 [2024-11-27 10:03:39.676849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.477 [2024-11-27 10:03:39.676880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.477 qpair failed and we were unable to recover it. 00:31:24.477 [2024-11-27 10:03:39.677250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.477 [2024-11-27 10:03:39.677282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.477 qpair failed and we were unable to recover it. 00:31:24.477 [2024-11-27 10:03:39.677661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.477 [2024-11-27 10:03:39.677690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.477 qpair failed and we were unable to recover it. 00:31:24.477 [2024-11-27 10:03:39.678049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.477 [2024-11-27 10:03:39.678077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.477 qpair failed and we were unable to recover it. 00:31:24.477 [2024-11-27 10:03:39.678454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.477 [2024-11-27 10:03:39.678485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.477 qpair failed and we were unable to recover it. 00:31:24.477 [2024-11-27 10:03:39.678861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.477 [2024-11-27 10:03:39.678891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.477 qpair failed and we were unable to recover it. 
00:31:24.477 [2024-11-27 10:03:39.679240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.477 [2024-11-27 10:03:39.679271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.477 qpair failed and we were unable to recover it. 00:31:24.477 [2024-11-27 10:03:39.679660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.477 [2024-11-27 10:03:39.679690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.477 qpair failed and we were unable to recover it. 00:31:24.477 [2024-11-27 10:03:39.680050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.477 [2024-11-27 10:03:39.680078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.477 qpair failed and we were unable to recover it. 00:31:24.477 [2024-11-27 10:03:39.680420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.477 [2024-11-27 10:03:39.680450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.477 qpair failed and we were unable to recover it. 00:31:24.477 [2024-11-27 10:03:39.680826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.477 [2024-11-27 10:03:39.680855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.477 qpair failed and we were unable to recover it. 00:31:24.477 [2024-11-27 10:03:39.681205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.477 [2024-11-27 10:03:39.681234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.477 qpair failed and we were unable to recover it. 00:31:24.477 [2024-11-27 10:03:39.681604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.477 [2024-11-27 10:03:39.681645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.477 qpair failed and we were unable to recover it. 00:31:24.477 [2024-11-27 10:03:39.681991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.477 [2024-11-27 10:03:39.682020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.477 qpair failed and we were unable to recover it. 00:31:24.477 [2024-11-27 10:03:39.682136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.477 [2024-11-27 10:03:39.682173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.477 qpair failed and we were unable to recover it. 00:31:24.477 [2024-11-27 10:03:39.682590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.477 [2024-11-27 10:03:39.682619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.477 qpair failed and we were unable to recover it. 
00:31:24.477 [2024-11-27 10:03:39.682970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.477 [2024-11-27 10:03:39.682998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa220000b90 with addr=10.0.0.2, port=4420 00:31:24.477 qpair failed and we were unable to recover it. 00:31:24.477 [2024-11-27 10:03:39.683200] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:24.477 10:03:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:24.477 10:03:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:24.477 10:03:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:24.477 10:03:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:24.477 [2024-11-27 10:03:39.694097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.477 [2024-11-27 10:03:39.694255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.477 [2024-11-27 10:03:39.694307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.477 [2024-11-27 10:03:39.694331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.477 [2024-11-27 10:03:39.694351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.477 [2024-11-27 10:03:39.694407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.477 qpair failed and we were unable to recover it. 00:31:24.478 10:03:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:24.478 10:03:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 4072323 00:31:24.478 [2024-11-27 10:03:39.703956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.478 [2024-11-27 10:03:39.704057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.478 [2024-11-27 10:03:39.704086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.478 [2024-11-27 10:03:39.704101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.478 [2024-11-27 10:03:39.704114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.478 [2024-11-27 10:03:39.704145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.478 qpair failed and we were unable to recover it. 
00:31:24.478 [2024-11-27 10:03:39.713959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.478 [2024-11-27 10:03:39.714035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.478 [2024-11-27 10:03:39.714056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.478 [2024-11-27 10:03:39.714066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.478 [2024-11-27 10:03:39.714076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.478 [2024-11-27 10:03:39.714098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.478 qpair failed and we were unable to recover it. 00:31:24.478 [2024-11-27 10:03:39.723892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.478 [2024-11-27 10:03:39.724004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.478 [2024-11-27 10:03:39.724021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.478 [2024-11-27 10:03:39.724029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.478 [2024-11-27 10:03:39.724035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.478 [2024-11-27 10:03:39.724053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.478 qpair failed and we were unable to recover it. 00:31:24.478 [2024-11-27 10:03:39.733941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.478 [2024-11-27 10:03:39.734012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.478 [2024-11-27 10:03:39.734029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.478 [2024-11-27 10:03:39.734036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.478 [2024-11-27 10:03:39.734042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.478 [2024-11-27 10:03:39.734060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.478 qpair failed and we were unable to recover it. 
00:31:24.478 [2024-11-27 10:03:39.743911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.478 [2024-11-27 10:03:39.743977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.478 [2024-11-27 10:03:39.743994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.478 [2024-11-27 10:03:39.744002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.478 [2024-11-27 10:03:39.744008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.478 [2024-11-27 10:03:39.744025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.478 qpair failed and we were unable to recover it. 00:31:24.478 [2024-11-27 10:03:39.753935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.478 [2024-11-27 10:03:39.754003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.478 [2024-11-27 10:03:39.754020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.478 [2024-11-27 10:03:39.754028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.478 [2024-11-27 10:03:39.754035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.478 [2024-11-27 10:03:39.754052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.478 qpair failed and we were unable to recover it. 00:31:24.478 [2024-11-27 10:03:39.763989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.478 [2024-11-27 10:03:39.764059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.478 [2024-11-27 10:03:39.764076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.478 [2024-11-27 10:03:39.764083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.478 [2024-11-27 10:03:39.764090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.478 [2024-11-27 10:03:39.764107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.478 qpair failed and we were unable to recover it. 
00:31:24.478 [2024-11-27 10:03:39.774056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.478 [2024-11-27 10:03:39.774171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.478 [2024-11-27 10:03:39.774188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.478 [2024-11-27 10:03:39.774196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.478 [2024-11-27 10:03:39.774203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.478 [2024-11-27 10:03:39.774220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.478 qpair failed and we were unable to recover it. 00:31:24.478 [2024-11-27 10:03:39.784072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.478 [2024-11-27 10:03:39.784149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.478 [2024-11-27 10:03:39.784171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.478 [2024-11-27 10:03:39.784179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.478 [2024-11-27 10:03:39.784186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.478 [2024-11-27 10:03:39.784203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.478 qpair failed and we were unable to recover it. 00:31:24.478 [2024-11-27 10:03:39.794091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.478 [2024-11-27 10:03:39.794176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.478 [2024-11-27 10:03:39.794192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.478 [2024-11-27 10:03:39.794205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.478 [2024-11-27 10:03:39.794211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.478 [2024-11-27 10:03:39.794228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.478 qpair failed and we were unable to recover it. 
00:31:24.478 [2024-11-27 10:03:39.804093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.478 [2024-11-27 10:03:39.804171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.478 [2024-11-27 10:03:39.804187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.478 [2024-11-27 10:03:39.804195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.479 [2024-11-27 10:03:39.804201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.479 [2024-11-27 10:03:39.804218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.479 qpair failed and we were unable to recover it. 00:31:24.479 [2024-11-27 10:03:39.814145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.479 [2024-11-27 10:03:39.814225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.479 [2024-11-27 10:03:39.814241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.479 [2024-11-27 10:03:39.814249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.479 [2024-11-27 10:03:39.814257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.479 [2024-11-27 10:03:39.814275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.479 qpair failed and we were unable to recover it. 00:31:24.479 [2024-11-27 10:03:39.824032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.479 [2024-11-27 10:03:39.824097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.479 [2024-11-27 10:03:39.824113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.479 [2024-11-27 10:03:39.824120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.479 [2024-11-27 10:03:39.824128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.479 [2024-11-27 10:03:39.824145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.479 qpair failed and we were unable to recover it. 
00:31:24.479 [2024-11-27 10:03:39.834170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.479 [2024-11-27 10:03:39.834232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.479 [2024-11-27 10:03:39.834248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.479 [2024-11-27 10:03:39.834255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.479 [2024-11-27 10:03:39.834262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.479 [2024-11-27 10:03:39.834284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.479 qpair failed and we were unable to recover it. 00:31:24.479 [2024-11-27 10:03:39.844233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.479 [2024-11-27 10:03:39.844301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.479 [2024-11-27 10:03:39.844316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.479 [2024-11-27 10:03:39.844324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.479 [2024-11-27 10:03:39.844330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.479 [2024-11-27 10:03:39.844346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.479 qpair failed and we were unable to recover it. 00:31:24.479 [2024-11-27 10:03:39.854280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.479 [2024-11-27 10:03:39.854358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.479 [2024-11-27 10:03:39.854374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.479 [2024-11-27 10:03:39.854382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.479 [2024-11-27 10:03:39.854390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.479 [2024-11-27 10:03:39.854407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.479 qpair failed and we were unable to recover it. 
00:31:24.479 [2024-11-27 10:03:39.864272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.479 [2024-11-27 10:03:39.864378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.479 [2024-11-27 10:03:39.864393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.479 [2024-11-27 10:03:39.864401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.479 [2024-11-27 10:03:39.864408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.479 [2024-11-27 10:03:39.864424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.479 qpair failed and we were unable to recover it. 00:31:24.479 [2024-11-27 10:03:39.874319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.479 [2024-11-27 10:03:39.874382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.479 [2024-11-27 10:03:39.874399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.479 [2024-11-27 10:03:39.874406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.479 [2024-11-27 10:03:39.874412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.479 [2024-11-27 10:03:39.874429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.479 qpair failed and we were unable to recover it. 00:31:24.479 [2024-11-27 10:03:39.884303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.479 [2024-11-27 10:03:39.884379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.479 [2024-11-27 10:03:39.884395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.479 [2024-11-27 10:03:39.884403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.479 [2024-11-27 10:03:39.884410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.479 [2024-11-27 10:03:39.884426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.479 qpair failed and we were unable to recover it. 
00:31:24.479 [2024-11-27 10:03:39.894572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.479 [2024-11-27 10:03:39.894657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.479 [2024-11-27 10:03:39.894673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.479 [2024-11-27 10:03:39.894681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.479 [2024-11-27 10:03:39.894687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.479 [2024-11-27 10:03:39.894704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.479 qpair failed and we were unable to recover it. 00:31:24.479 [2024-11-27 10:03:39.904454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.479 [2024-11-27 10:03:39.904531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.479 [2024-11-27 10:03:39.904549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.479 [2024-11-27 10:03:39.904556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.479 [2024-11-27 10:03:39.904562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.479 [2024-11-27 10:03:39.904579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.479 qpair failed and we were unable to recover it. 00:31:24.479 [2024-11-27 10:03:39.914443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.479 [2024-11-27 10:03:39.914517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.479 [2024-11-27 10:03:39.914532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.479 [2024-11-27 10:03:39.914540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.479 [2024-11-27 10:03:39.914546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.479 [2024-11-27 10:03:39.914562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.479 qpair failed and we were unable to recover it. 
00:31:24.479 [2024-11-27 10:03:39.924516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.479 [2024-11-27 10:03:39.924586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.479 [2024-11-27 10:03:39.924607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.479 [2024-11-27 10:03:39.924615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.479 [2024-11-27 10:03:39.924621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.479 [2024-11-27 10:03:39.924639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.479 qpair failed and we were unable to recover it. 00:31:24.479 [2024-11-27 10:03:39.934521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.479 [2024-11-27 10:03:39.934591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.479 [2024-11-27 10:03:39.934610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.479 [2024-11-27 10:03:39.934617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.479 [2024-11-27 10:03:39.934624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.480 [2024-11-27 10:03:39.934640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.480 qpair failed and we were unable to recover it. 00:31:24.742 [2024-11-27 10:03:39.944515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.742 [2024-11-27 10:03:39.944585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.742 [2024-11-27 10:03:39.944602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.742 [2024-11-27 10:03:39.944609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.742 [2024-11-27 10:03:39.944616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.742 [2024-11-27 10:03:39.944632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.742 qpair failed and we were unable to recover it. 
00:31:24.742 [2024-11-27 10:03:39.954412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.742 [2024-11-27 10:03:39.954480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.742 [2024-11-27 10:03:39.954496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.742 [2024-11-27 10:03:39.954504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.742 [2024-11-27 10:03:39.954511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.742 [2024-11-27 10:03:39.954527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.742 qpair failed and we were unable to recover it. 00:31:24.742 [2024-11-27 10:03:39.964573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.742 [2024-11-27 10:03:39.964641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.742 [2024-11-27 10:03:39.964657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.742 [2024-11-27 10:03:39.964664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.742 [2024-11-27 10:03:39.964670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.742 [2024-11-27 10:03:39.964693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.742 qpair failed and we were unable to recover it. 00:31:24.742 [2024-11-27 10:03:39.974643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.742 [2024-11-27 10:03:39.974716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.742 [2024-11-27 10:03:39.974733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.742 [2024-11-27 10:03:39.974740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.742 [2024-11-27 10:03:39.974747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.742 [2024-11-27 10:03:39.974763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.742 qpair failed and we were unable to recover it. 
00:31:24.742 [2024-11-27 10:03:39.984638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.742 [2024-11-27 10:03:39.984713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.742 [2024-11-27 10:03:39.984729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.742 [2024-11-27 10:03:39.984736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.742 [2024-11-27 10:03:39.984743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.742 [2024-11-27 10:03:39.984758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.742 qpair failed and we were unable to recover it. 00:31:24.742 [2024-11-27 10:03:39.994667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.742 [2024-11-27 10:03:39.994729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.742 [2024-11-27 10:03:39.994745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.742 [2024-11-27 10:03:39.994752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.742 [2024-11-27 10:03:39.994758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.742 [2024-11-27 10:03:39.994774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.742 qpair failed and we were unable to recover it. 00:31:24.742 [2024-11-27 10:03:40.004754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.742 [2024-11-27 10:03:40.004905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.742 [2024-11-27 10:03:40.004943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.742 [2024-11-27 10:03:40.004959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.742 [2024-11-27 10:03:40.004970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.742 [2024-11-27 10:03:40.005001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.742 qpair failed and we were unable to recover it. 
00:31:24.742 [2024-11-27 10:03:40.014769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.742 [2024-11-27 10:03:40.014845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.742 [2024-11-27 10:03:40.014874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.742 [2024-11-27 10:03:40.014883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.742 [2024-11-27 10:03:40.014890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.742 [2024-11-27 10:03:40.014914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.742 qpair failed and we were unable to recover it. 00:31:24.742 [2024-11-27 10:03:40.024632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.742 [2024-11-27 10:03:40.024697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.742 [2024-11-27 10:03:40.024718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.742 [2024-11-27 10:03:40.024726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.742 [2024-11-27 10:03:40.024733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.742 [2024-11-27 10:03:40.024753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.742 qpair failed and we were unable to recover it. 00:31:24.742 [2024-11-27 10:03:40.034760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.742 [2024-11-27 10:03:40.034845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.742 [2024-11-27 10:03:40.034884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.742 [2024-11-27 10:03:40.034896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.742 [2024-11-27 10:03:40.034907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.742 [2024-11-27 10:03:40.034936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.742 qpair failed and we were unable to recover it. 
00:31:24.742 [2024-11-27 10:03:40.044821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.742 [2024-11-27 10:03:40.044893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.742 [2024-11-27 10:03:40.044919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.742 [2024-11-27 10:03:40.044928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.742 [2024-11-27 10:03:40.044935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.742 [2024-11-27 10:03:40.044958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.742 qpair failed and we were unable to recover it. 00:31:24.742 [2024-11-27 10:03:40.054865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.742 [2024-11-27 10:03:40.054936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.742 [2024-11-27 10:03:40.054963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.743 [2024-11-27 10:03:40.054971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.743 [2024-11-27 10:03:40.054980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.743 [2024-11-27 10:03:40.055000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.743 qpair failed and we were unable to recover it. 00:31:24.743 [2024-11-27 10:03:40.064860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.743 [2024-11-27 10:03:40.064929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.743 [2024-11-27 10:03:40.064950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.743 [2024-11-27 10:03:40.064959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.743 [2024-11-27 10:03:40.064968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.743 [2024-11-27 10:03:40.064988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.743 qpair failed and we were unable to recover it. 
00:31:24.743 [2024-11-27 10:03:40.074858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.743 [2024-11-27 10:03:40.074922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.743 [2024-11-27 10:03:40.074951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.743 [2024-11-27 10:03:40.074959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.743 [2024-11-27 10:03:40.074966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.743 [2024-11-27 10:03:40.074989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.743 qpair failed and we were unable to recover it. 00:31:24.743 [2024-11-27 10:03:40.084934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.743 [2024-11-27 10:03:40.085014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.743 [2024-11-27 10:03:40.085032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.743 [2024-11-27 10:03:40.085041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.743 [2024-11-27 10:03:40.085048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.743 [2024-11-27 10:03:40.085066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.743 qpair failed and we were unable to recover it. 00:31:24.743 [2024-11-27 10:03:40.095001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.743 [2024-11-27 10:03:40.095078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.743 [2024-11-27 10:03:40.095096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.743 [2024-11-27 10:03:40.095105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.743 [2024-11-27 10:03:40.095118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.743 [2024-11-27 10:03:40.095139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.743 qpair failed and we were unable to recover it. 
00:31:24.743 [2024-11-27 10:03:40.104986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.743 [2024-11-27 10:03:40.105047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.743 [2024-11-27 10:03:40.105065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.743 [2024-11-27 10:03:40.105073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.743 [2024-11-27 10:03:40.105080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.743 [2024-11-27 10:03:40.105097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.743 qpair failed and we were unable to recover it. 00:31:24.743 [2024-11-27 10:03:40.115037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.743 [2024-11-27 10:03:40.115101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.743 [2024-11-27 10:03:40.115118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.743 [2024-11-27 10:03:40.115126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.743 [2024-11-27 10:03:40.115133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.743 [2024-11-27 10:03:40.115150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.743 qpair failed and we were unable to recover it. 00:31:24.743 [2024-11-27 10:03:40.125063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.743 [2024-11-27 10:03:40.125183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.743 [2024-11-27 10:03:40.125201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.743 [2024-11-27 10:03:40.125208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.743 [2024-11-27 10:03:40.125215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.743 [2024-11-27 10:03:40.125234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.743 qpair failed and we were unable to recover it. 
00:31:24.743 [2024-11-27 10:03:40.135130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.743 [2024-11-27 10:03:40.135207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.743 [2024-11-27 10:03:40.135224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.743 [2024-11-27 10:03:40.135231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.743 [2024-11-27 10:03:40.135238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.743 [2024-11-27 10:03:40.135256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.743 qpair failed and we were unable to recover it. 00:31:24.743 [2024-11-27 10:03:40.145145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.743 [2024-11-27 10:03:40.145217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.743 [2024-11-27 10:03:40.145241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.743 [2024-11-27 10:03:40.145250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.743 [2024-11-27 10:03:40.145256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.743 [2024-11-27 10:03:40.145276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.743 qpair failed and we were unable to recover it. 00:31:24.743 [2024-11-27 10:03:40.155055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.743 [2024-11-27 10:03:40.155150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.743 [2024-11-27 10:03:40.155180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.743 [2024-11-27 10:03:40.155189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.743 [2024-11-27 10:03:40.155195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.743 [2024-11-27 10:03:40.155214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.743 qpair failed and we were unable to recover it. 
00:31:24.743 [2024-11-27 10:03:40.165201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.743 [2024-11-27 10:03:40.165273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.743 [2024-11-27 10:03:40.165294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.743 [2024-11-27 10:03:40.165302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.743 [2024-11-27 10:03:40.165309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.743 [2024-11-27 10:03:40.165328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.743 qpair failed and we were unable to recover it. 00:31:24.743 [2024-11-27 10:03:40.175260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.743 [2024-11-27 10:03:40.175341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.743 [2024-11-27 10:03:40.175363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.743 [2024-11-27 10:03:40.175372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.743 [2024-11-27 10:03:40.175379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.743 [2024-11-27 10:03:40.175399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.743 qpair failed and we were unable to recover it. 00:31:24.743 [2024-11-27 10:03:40.185266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.743 [2024-11-27 10:03:40.185333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.743 [2024-11-27 10:03:40.185359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.743 [2024-11-27 10:03:40.185367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.744 [2024-11-27 10:03:40.185374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.744 [2024-11-27 10:03:40.185393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.744 qpair failed and we were unable to recover it. 
00:31:24.744 [2024-11-27 10:03:40.195280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.744 [2024-11-27 10:03:40.195348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.744 [2024-11-27 10:03:40.195368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.744 [2024-11-27 10:03:40.195376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.744 [2024-11-27 10:03:40.195385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.744 [2024-11-27 10:03:40.195404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.744 qpair failed and we were unable to recover it. 00:31:24.744 [2024-11-27 10:03:40.205328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.744 [2024-11-27 10:03:40.205399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.744 [2024-11-27 10:03:40.205419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.744 [2024-11-27 10:03:40.205427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.744 [2024-11-27 10:03:40.205434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:24.744 [2024-11-27 10:03:40.205452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.744 qpair failed and we were unable to recover it. 00:31:25.007 [2024-11-27 10:03:40.215406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.007 [2024-11-27 10:03:40.215486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.007 [2024-11-27 10:03:40.215507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.007 [2024-11-27 10:03:40.215516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.007 [2024-11-27 10:03:40.215523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.007 [2024-11-27 10:03:40.215541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.007 qpair failed and we were unable to recover it. 
00:31:25.007 [2024-11-27 10:03:40.225370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.007 [2024-11-27 10:03:40.225444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.007 [2024-11-27 10:03:40.225464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.007 [2024-11-27 10:03:40.225482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.007 [2024-11-27 10:03:40.225492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.007 [2024-11-27 10:03:40.225511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.007 qpair failed and we were unable to recover it. 00:31:25.007 [2024-11-27 10:03:40.235334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.007 [2024-11-27 10:03:40.235398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.007 [2024-11-27 10:03:40.235420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.007 [2024-11-27 10:03:40.235428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.007 [2024-11-27 10:03:40.235435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.007 [2024-11-27 10:03:40.235455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.007 qpair failed and we were unable to recover it. 00:31:25.007 [2024-11-27 10:03:40.245437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.007 [2024-11-27 10:03:40.245506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.007 [2024-11-27 10:03:40.245527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.007 [2024-11-27 10:03:40.245535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.007 [2024-11-27 10:03:40.245542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.007 [2024-11-27 10:03:40.245561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.007 qpair failed and we were unable to recover it. 
00:31:25.007 [2024-11-27 10:03:40.255489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.007 [2024-11-27 10:03:40.255561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.007 [2024-11-27 10:03:40.255582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.007 [2024-11-27 10:03:40.255590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.007 [2024-11-27 10:03:40.255597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.007 [2024-11-27 10:03:40.255616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.007 qpair failed and we were unable to recover it. 00:31:25.007 [2024-11-27 10:03:40.265496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.007 [2024-11-27 10:03:40.265573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.007 [2024-11-27 10:03:40.265594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.007 [2024-11-27 10:03:40.265601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.007 [2024-11-27 10:03:40.265608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.007 [2024-11-27 10:03:40.265627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.007 qpair failed and we were unable to recover it. 00:31:25.007 [2024-11-27 10:03:40.275536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.007 [2024-11-27 10:03:40.275600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.007 [2024-11-27 10:03:40.275621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.007 [2024-11-27 10:03:40.275628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.007 [2024-11-27 10:03:40.275635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.007 [2024-11-27 10:03:40.275653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.007 qpair failed and we were unable to recover it. 
00:31:25.007 [2024-11-27 10:03:40.285546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.008 [2024-11-27 10:03:40.285609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.008 [2024-11-27 10:03:40.285629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.008 [2024-11-27 10:03:40.285636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.008 [2024-11-27 10:03:40.285643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.008 [2024-11-27 10:03:40.285661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.008 qpair failed and we were unable to recover it. 00:31:25.008 [2024-11-27 10:03:40.295636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.008 [2024-11-27 10:03:40.295703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.008 [2024-11-27 10:03:40.295724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.008 [2024-11-27 10:03:40.295732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.008 [2024-11-27 10:03:40.295739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.008 [2024-11-27 10:03:40.295758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.008 qpair failed and we were unable to recover it. 00:31:25.008 [2024-11-27 10:03:40.305583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.008 [2024-11-27 10:03:40.305644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.008 [2024-11-27 10:03:40.305664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.008 [2024-11-27 10:03:40.305671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.008 [2024-11-27 10:03:40.305678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.008 [2024-11-27 10:03:40.305696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.008 qpair failed and we were unable to recover it. 
00:31:25.008 [2024-11-27 10:03:40.315668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.008 [2024-11-27 10:03:40.315740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.008 [2024-11-27 10:03:40.315762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.008 [2024-11-27 10:03:40.315772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.008 [2024-11-27 10:03:40.315780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.008 [2024-11-27 10:03:40.315799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.008 qpair failed and we were unable to recover it. 00:31:25.008 [2024-11-27 10:03:40.325693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.008 [2024-11-27 10:03:40.325762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.008 [2024-11-27 10:03:40.325782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.008 [2024-11-27 10:03:40.325790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.008 [2024-11-27 10:03:40.325797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.008 [2024-11-27 10:03:40.325814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.008 qpair failed and we were unable to recover it. 00:31:25.008 [2024-11-27 10:03:40.335749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.008 [2024-11-27 10:03:40.335856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.008 [2024-11-27 10:03:40.335878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.008 [2024-11-27 10:03:40.335886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.008 [2024-11-27 10:03:40.335892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.008 [2024-11-27 10:03:40.335910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.008 qpair failed and we were unable to recover it. 
00:31:25.008 [2024-11-27 10:03:40.345779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.008 [2024-11-27 10:03:40.345841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.008 [2024-11-27 10:03:40.345860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.008 [2024-11-27 10:03:40.345868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.008 [2024-11-27 10:03:40.345874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.008 [2024-11-27 10:03:40.345892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.008 qpair failed and we were unable to recover it. 00:31:25.008 [2024-11-27 10:03:40.355666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.008 [2024-11-27 10:03:40.355729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.008 [2024-11-27 10:03:40.355749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.008 [2024-11-27 10:03:40.355763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.008 [2024-11-27 10:03:40.355769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.008 [2024-11-27 10:03:40.355787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.008 qpair failed and we were unable to recover it. 00:31:25.008 [2024-11-27 10:03:40.365832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.008 [2024-11-27 10:03:40.365914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.008 [2024-11-27 10:03:40.365933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.008 [2024-11-27 10:03:40.365941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.008 [2024-11-27 10:03:40.365952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.008 [2024-11-27 10:03:40.365979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.008 qpair failed and we were unable to recover it. 
00:31:25.008 [2024-11-27 10:03:40.375897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.008 [2024-11-27 10:03:40.376016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.008 [2024-11-27 10:03:40.376037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.008 [2024-11-27 10:03:40.376045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.008 [2024-11-27 10:03:40.376052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.008 [2024-11-27 10:03:40.376072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.009 qpair failed and we were unable to recover it. 00:31:25.009 [2024-11-27 10:03:40.385881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.009 [2024-11-27 10:03:40.385971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.009 [2024-11-27 10:03:40.385994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.009 [2024-11-27 10:03:40.386001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.009 [2024-11-27 10:03:40.386008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.009 [2024-11-27 10:03:40.386028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.009 qpair failed and we were unable to recover it. 00:31:25.009 [2024-11-27 10:03:40.395956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.009 [2024-11-27 10:03:40.396029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.009 [2024-11-27 10:03:40.396050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.009 [2024-11-27 10:03:40.396058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.009 [2024-11-27 10:03:40.396065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.009 [2024-11-27 10:03:40.396089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.009 qpair failed and we were unable to recover it. 
00:31:25.009 [2024-11-27 10:03:40.405929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.009 [2024-11-27 10:03:40.406000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.009 [2024-11-27 10:03:40.406021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.009 [2024-11-27 10:03:40.406028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.009 [2024-11-27 10:03:40.406035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.009 [2024-11-27 10:03:40.406053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.009 qpair failed and we were unable to recover it. 00:31:25.009 [2024-11-27 10:03:40.415896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.009 [2024-11-27 10:03:40.415966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.009 [2024-11-27 10:03:40.415986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.009 [2024-11-27 10:03:40.415995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.009 [2024-11-27 10:03:40.416001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.009 [2024-11-27 10:03:40.416020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.009 qpair failed and we were unable to recover it. 00:31:25.009 [2024-11-27 10:03:40.426089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.009 [2024-11-27 10:03:40.426166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.009 [2024-11-27 10:03:40.426189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.009 [2024-11-27 10:03:40.426197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.009 [2024-11-27 10:03:40.426203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.009 [2024-11-27 10:03:40.426223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.009 qpair failed and we were unable to recover it. 
00:31:25.009 [2024-11-27 10:03:40.436034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.009 [2024-11-27 10:03:40.436109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.009 [2024-11-27 10:03:40.436129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.009 [2024-11-27 10:03:40.436136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.009 [2024-11-27 10:03:40.436143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.009 [2024-11-27 10:03:40.436166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.009 qpair failed and we were unable to recover it. 00:31:25.009 [2024-11-27 10:03:40.446069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.009 [2024-11-27 10:03:40.446156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.009 [2024-11-27 10:03:40.446183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.009 [2024-11-27 10:03:40.446192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.009 [2024-11-27 10:03:40.446203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.009 [2024-11-27 10:03:40.446229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.009 qpair failed and we were unable to recover it. 00:31:25.009 [2024-11-27 10:03:40.456113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.009 [2024-11-27 10:03:40.456194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.009 [2024-11-27 10:03:40.456215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.009 [2024-11-27 10:03:40.456223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.009 [2024-11-27 10:03:40.456230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.009 [2024-11-27 10:03:40.456249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.009 qpair failed and we were unable to recover it. 
00:31:25.009 [2024-11-27 10:03:40.466147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.009 [2024-11-27 10:03:40.466220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.009 [2024-11-27 10:03:40.466241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.009 [2024-11-27 10:03:40.466249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.009 [2024-11-27 10:03:40.466257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.009 [2024-11-27 10:03:40.466274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.009 qpair failed and we were unable to recover it. 00:31:25.272 [2024-11-27 10:03:40.476168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.272 [2024-11-27 10:03:40.476232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.272 [2024-11-27 10:03:40.476252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.272 [2024-11-27 10:03:40.476261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.272 [2024-11-27 10:03:40.476268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.272 [2024-11-27 10:03:40.476287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.272 qpair failed and we were unable to recover it. 00:31:25.272 [2024-11-27 10:03:40.486195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.272 [2024-11-27 10:03:40.486262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.272 [2024-11-27 10:03:40.486295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.272 [2024-11-27 10:03:40.486304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.272 [2024-11-27 10:03:40.486311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.272 [2024-11-27 10:03:40.486330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.272 qpair failed and we were unable to recover it. 
00:31:25.272 [2024-11-27 10:03:40.496224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.272 [2024-11-27 10:03:40.496304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.272 [2024-11-27 10:03:40.496325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.272 [2024-11-27 10:03:40.496333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.272 [2024-11-27 10:03:40.496340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.272 [2024-11-27 10:03:40.496359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.272 qpair failed and we were unable to recover it. 00:31:25.272 [2024-11-27 10:03:40.506262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.272 [2024-11-27 10:03:40.506380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.272 [2024-11-27 10:03:40.506400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.272 [2024-11-27 10:03:40.506408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.272 [2024-11-27 10:03:40.506415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.272 [2024-11-27 10:03:40.506433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.272 qpair failed and we were unable to recover it. 00:31:25.272 [2024-11-27 10:03:40.516273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.273 [2024-11-27 10:03:40.516335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.273 [2024-11-27 10:03:40.516352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.273 [2024-11-27 10:03:40.516360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.273 [2024-11-27 10:03:40.516366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.273 [2024-11-27 10:03:40.516384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.273 qpair failed and we were unable to recover it. 
00:31:25.273 [2024-11-27 10:03:40.526327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.273 [2024-11-27 10:03:40.526407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.273 [2024-11-27 10:03:40.526424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.273 [2024-11-27 10:03:40.526432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.273 [2024-11-27 10:03:40.526443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.273 [2024-11-27 10:03:40.526460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.273 qpair failed and we were unable to recover it. 00:31:25.273 [2024-11-27 10:03:40.536358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.273 [2024-11-27 10:03:40.536477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.273 [2024-11-27 10:03:40.536495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.273 [2024-11-27 10:03:40.536502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.273 [2024-11-27 10:03:40.536509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.273 [2024-11-27 10:03:40.536526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.273 qpair failed and we were unable to recover it. 00:31:25.273 [2024-11-27 10:03:40.546330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.273 [2024-11-27 10:03:40.546394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.273 [2024-11-27 10:03:40.546410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.273 [2024-11-27 10:03:40.546417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.273 [2024-11-27 10:03:40.546423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.273 [2024-11-27 10:03:40.546440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.273 qpair failed and we were unable to recover it. 
00:31:25.273 [2024-11-27 10:03:40.556415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.273 [2024-11-27 10:03:40.556486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.273 [2024-11-27 10:03:40.556503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.273 [2024-11-27 10:03:40.556510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.273 [2024-11-27 10:03:40.556517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.273 [2024-11-27 10:03:40.556533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.273 qpair failed and we were unable to recover it. 00:31:25.273 [2024-11-27 10:03:40.566400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.273 [2024-11-27 10:03:40.566466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.273 [2024-11-27 10:03:40.566482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.273 [2024-11-27 10:03:40.566490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.273 [2024-11-27 10:03:40.566496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.273 [2024-11-27 10:03:40.566512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.273 qpair failed and we were unable to recover it. 00:31:25.273 [2024-11-27 10:03:40.576480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.273 [2024-11-27 10:03:40.576556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.273 [2024-11-27 10:03:40.576573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.273 [2024-11-27 10:03:40.576580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.273 [2024-11-27 10:03:40.576586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.273 [2024-11-27 10:03:40.576603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.273 qpair failed and we were unable to recover it. 
00:31:25.273 [2024-11-27 10:03:40.586468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.273 [2024-11-27 10:03:40.586530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.273 [2024-11-27 10:03:40.586546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.273 [2024-11-27 10:03:40.586553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.273 [2024-11-27 10:03:40.586560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.273 [2024-11-27 10:03:40.586576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.273 qpair failed and we were unable to recover it. 00:31:25.273 [2024-11-27 10:03:40.596510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.273 [2024-11-27 10:03:40.596574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.273 [2024-11-27 10:03:40.596590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.273 [2024-11-27 10:03:40.596597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.273 [2024-11-27 10:03:40.596604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.273 [2024-11-27 10:03:40.596621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.273 qpair failed and we were unable to recover it. 00:31:25.273 [2024-11-27 10:03:40.606564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.273 [2024-11-27 10:03:40.606630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.273 [2024-11-27 10:03:40.606647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.273 [2024-11-27 10:03:40.606654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.273 [2024-11-27 10:03:40.606660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.273 [2024-11-27 10:03:40.606676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.273 qpair failed and we were unable to recover it. 
00:31:25.274 [2024-11-27 10:03:40.616604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.274 [2024-11-27 10:03:40.616693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.274 [2024-11-27 10:03:40.616747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.274 [2024-11-27 10:03:40.616755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.274 [2024-11-27 10:03:40.616762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.274 [2024-11-27 10:03:40.616792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.274 qpair failed and we were unable to recover it. 00:31:25.274 [2024-11-27 10:03:40.626647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.274 [2024-11-27 10:03:40.626714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.274 [2024-11-27 10:03:40.626733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.274 [2024-11-27 10:03:40.626740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.274 [2024-11-27 10:03:40.626747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.274 [2024-11-27 10:03:40.626765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.274 qpair failed and we were unable to recover it. 00:31:25.274 [2024-11-27 10:03:40.636543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.274 [2024-11-27 10:03:40.636602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.274 [2024-11-27 10:03:40.636620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.274 [2024-11-27 10:03:40.636627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.274 [2024-11-27 10:03:40.636633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.274 [2024-11-27 10:03:40.636651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.274 qpair failed and we were unable to recover it. 
00:31:25.274 [2024-11-27 10:03:40.646663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.274 [2024-11-27 10:03:40.646728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.274 [2024-11-27 10:03:40.646744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.274 [2024-11-27 10:03:40.646751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.274 [2024-11-27 10:03:40.646757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.274 [2024-11-27 10:03:40.646774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.274 qpair failed and we were unable to recover it. 00:31:25.274 [2024-11-27 10:03:40.656639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.274 [2024-11-27 10:03:40.656714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.274 [2024-11-27 10:03:40.656735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.274 [2024-11-27 10:03:40.656742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.274 [2024-11-27 10:03:40.656754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.274 [2024-11-27 10:03:40.656773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.274 qpair failed and we were unable to recover it. 00:31:25.274 [2024-11-27 10:03:40.666733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.274 [2024-11-27 10:03:40.666830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.274 [2024-11-27 10:03:40.666847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.274 [2024-11-27 10:03:40.666855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.274 [2024-11-27 10:03:40.666861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.274 [2024-11-27 10:03:40.666878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.274 qpair failed and we were unable to recover it. 
00:31:25.274 [2024-11-27 10:03:40.676775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.274 [2024-11-27 10:03:40.676839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.274 [2024-11-27 10:03:40.676855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.274 [2024-11-27 10:03:40.676862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.274 [2024-11-27 10:03:40.676869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.274 [2024-11-27 10:03:40.676885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.274 qpair failed and we were unable to recover it. 00:31:25.274 [2024-11-27 10:03:40.686846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.274 [2024-11-27 10:03:40.686930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.274 [2024-11-27 10:03:40.686946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.274 [2024-11-27 10:03:40.686953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.274 [2024-11-27 10:03:40.686960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.274 [2024-11-27 10:03:40.686976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.274 qpair failed and we were unable to recover it. 00:31:25.274 [2024-11-27 10:03:40.696740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.274 [2024-11-27 10:03:40.696804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.274 [2024-11-27 10:03:40.696821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.274 [2024-11-27 10:03:40.696828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.274 [2024-11-27 10:03:40.696835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.274 [2024-11-27 10:03:40.696851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.274 qpair failed and we were unable to recover it. 
00:31:25.274 [2024-11-27 10:03:40.706884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.274 [2024-11-27 10:03:40.706941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.274 [2024-11-27 10:03:40.706960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.274 [2024-11-27 10:03:40.706967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.274 [2024-11-27 10:03:40.706973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.274 [2024-11-27 10:03:40.706991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.274 qpair failed and we were unable to recover it. 00:31:25.274 [2024-11-27 10:03:40.716895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.274 [2024-11-27 10:03:40.716991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.274 [2024-11-27 10:03:40.717007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.274 [2024-11-27 10:03:40.717015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.275 [2024-11-27 10:03:40.717021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.275 [2024-11-27 10:03:40.717037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.275 qpair failed and we were unable to recover it. 00:31:25.275 [2024-11-27 10:03:40.726937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.275 [2024-11-27 10:03:40.727049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.275 [2024-11-27 10:03:40.727065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.275 [2024-11-27 10:03:40.727073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.275 [2024-11-27 10:03:40.727079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:25.275 [2024-11-27 10:03:40.727096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.275 qpair failed and we were unable to recover it. 
00:31:25.537 [2024-11-27 10:03:40.737014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.537 [2024-11-27 10:03:40.737089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.537 [2024-11-27 10:03:40.737106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.537 [2024-11-27 10:03:40.737113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.537 [2024-11-27 10:03:40.737120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.537 [2024-11-27 10:03:40.737137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.537 qpair failed and we were unable to recover it.
00:31:25.537 [2024-11-27 10:03:40.746946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.537 [2024-11-27 10:03:40.747011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.537 [2024-11-27 10:03:40.747033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.537 [2024-11-27 10:03:40.747040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.537 [2024-11-27 10:03:40.747047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.537 [2024-11-27 10:03:40.747064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.537 qpair failed and we were unable to recover it.
00:31:25.537 [2024-11-27 10:03:40.756989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.537 [2024-11-27 10:03:40.757056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.537 [2024-11-27 10:03:40.757073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.537 [2024-11-27 10:03:40.757081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.537 [2024-11-27 10:03:40.757087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.537 [2024-11-27 10:03:40.757104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.537 qpair failed and we were unable to recover it.
00:31:25.537 [2024-11-27 10:03:40.767032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.537 [2024-11-27 10:03:40.767101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.537 [2024-11-27 10:03:40.767118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.537 [2024-11-27 10:03:40.767125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.537 [2024-11-27 10:03:40.767131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.537 [2024-11-27 10:03:40.767148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.537 qpair failed and we were unable to recover it.
00:31:25.537 [2024-11-27 10:03:40.777117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.537 [2024-11-27 10:03:40.777189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.537 [2024-11-27 10:03:40.777206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.537 [2024-11-27 10:03:40.777213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.537 [2024-11-27 10:03:40.777220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.537 [2024-11-27 10:03:40.777236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.537 qpair failed and we were unable to recover it.
00:31:25.537 [2024-11-27 10:03:40.786998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.537 [2024-11-27 10:03:40.787065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.537 [2024-11-27 10:03:40.787084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.537 [2024-11-27 10:03:40.787097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.537 [2024-11-27 10:03:40.787103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.537 [2024-11-27 10:03:40.787121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.537 qpair failed and we were unable to recover it.
00:31:25.537 [2024-11-27 10:03:40.797115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.537 [2024-11-27 10:03:40.797183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.537 [2024-11-27 10:03:40.797202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.537 [2024-11-27 10:03:40.797209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.537 [2024-11-27 10:03:40.797215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.537 [2024-11-27 10:03:40.797233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.537 qpair failed and we were unable to recover it.
00:31:25.537 [2024-11-27 10:03:40.807233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.537 [2024-11-27 10:03:40.807298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.537 [2024-11-27 10:03:40.807314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.537 [2024-11-27 10:03:40.807321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.537 [2024-11-27 10:03:40.807327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.537 [2024-11-27 10:03:40.807344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.537 qpair failed and we were unable to recover it.
00:31:25.537 [2024-11-27 10:03:40.817231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.537 [2024-11-27 10:03:40.817309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.537 [2024-11-27 10:03:40.817325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.537 [2024-11-27 10:03:40.817334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.537 [2024-11-27 10:03:40.817341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.537 [2024-11-27 10:03:40.817359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.537 qpair failed and we were unable to recover it.
00:31:25.537 [2024-11-27 10:03:40.827199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.537 [2024-11-27 10:03:40.827273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.537 [2024-11-27 10:03:40.827289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.537 [2024-11-27 10:03:40.827296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.537 [2024-11-27 10:03:40.827302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.537 [2024-11-27 10:03:40.827319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.537 qpair failed and we were unable to recover it.
00:31:25.537 [2024-11-27 10:03:40.837276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.537 [2024-11-27 10:03:40.837339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.538 [2024-11-27 10:03:40.837355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.538 [2024-11-27 10:03:40.837363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.538 [2024-11-27 10:03:40.837369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.538 [2024-11-27 10:03:40.837385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.538 qpair failed and we were unable to recover it.
00:31:25.538 [2024-11-27 10:03:40.847284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.538 [2024-11-27 10:03:40.847350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.538 [2024-11-27 10:03:40.847366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.538 [2024-11-27 10:03:40.847373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.538 [2024-11-27 10:03:40.847379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.538 [2024-11-27 10:03:40.847396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.538 qpair failed and we were unable to recover it.
00:31:25.538 [2024-11-27 10:03:40.857360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.538 [2024-11-27 10:03:40.857474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.538 [2024-11-27 10:03:40.857490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.538 [2024-11-27 10:03:40.857499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.538 [2024-11-27 10:03:40.857505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.538 [2024-11-27 10:03:40.857521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.538 qpair failed and we were unable to recover it.
00:31:25.538 [2024-11-27 10:03:40.867395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.538 [2024-11-27 10:03:40.867460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.538 [2024-11-27 10:03:40.867476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.538 [2024-11-27 10:03:40.867483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.538 [2024-11-27 10:03:40.867490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.538 [2024-11-27 10:03:40.867507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.538 qpair failed and we were unable to recover it.
00:31:25.538 [2024-11-27 10:03:40.877390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.538 [2024-11-27 10:03:40.877457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.538 [2024-11-27 10:03:40.877473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.538 [2024-11-27 10:03:40.877481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.538 [2024-11-27 10:03:40.877487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.538 [2024-11-27 10:03:40.877504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.538 qpair failed and we were unable to recover it.
00:31:25.538 [2024-11-27 10:03:40.887434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.538 [2024-11-27 10:03:40.887510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.538 [2024-11-27 10:03:40.887527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.538 [2024-11-27 10:03:40.887534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.538 [2024-11-27 10:03:40.887541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.538 [2024-11-27 10:03:40.887558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.538 qpair failed and we were unable to recover it.
00:31:25.538 [2024-11-27 10:03:40.897507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.538 [2024-11-27 10:03:40.897635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.538 [2024-11-27 10:03:40.897652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.538 [2024-11-27 10:03:40.897660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.538 [2024-11-27 10:03:40.897666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.538 [2024-11-27 10:03:40.897683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.538 qpair failed and we were unable to recover it.
00:31:25.538 [2024-11-27 10:03:40.907480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.538 [2024-11-27 10:03:40.907539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.538 [2024-11-27 10:03:40.907555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.538 [2024-11-27 10:03:40.907562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.538 [2024-11-27 10:03:40.907568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.538 [2024-11-27 10:03:40.907585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.538 qpair failed and we were unable to recover it.
00:31:25.538 [2024-11-27 10:03:40.917517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.538 [2024-11-27 10:03:40.917578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.538 [2024-11-27 10:03:40.917594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.538 [2024-11-27 10:03:40.917606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.538 [2024-11-27 10:03:40.917612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.538 [2024-11-27 10:03:40.917630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.538 qpair failed and we were unable to recover it.
00:31:25.538 [2024-11-27 10:03:40.927524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.538 [2024-11-27 10:03:40.927590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.538 [2024-11-27 10:03:40.927607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.538 [2024-11-27 10:03:40.927614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.538 [2024-11-27 10:03:40.927620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.538 [2024-11-27 10:03:40.927637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.538 qpair failed and we were unable to recover it.
00:31:25.538 [2024-11-27 10:03:40.937588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.538 [2024-11-27 10:03:40.937650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.538 [2024-11-27 10:03:40.937666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.538 [2024-11-27 10:03:40.937674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.538 [2024-11-27 10:03:40.937680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.538 [2024-11-27 10:03:40.937696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.538 qpair failed and we were unable to recover it.
00:31:25.538 [2024-11-27 10:03:40.947615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.538 [2024-11-27 10:03:40.947692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.538 [2024-11-27 10:03:40.947708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.538 [2024-11-27 10:03:40.947715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.538 [2024-11-27 10:03:40.947722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.538 [2024-11-27 10:03:40.947738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.538 qpair failed and we were unable to recover it.
00:31:25.538 [2024-11-27 10:03:40.957607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.538 [2024-11-27 10:03:40.957670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.538 [2024-11-27 10:03:40.957686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.538 [2024-11-27 10:03:40.957694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.538 [2024-11-27 10:03:40.957700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.538 [2024-11-27 10:03:40.957722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.538 qpair failed and we were unable to recover it.
00:31:25.538 [2024-11-27 10:03:40.967690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.538 [2024-11-27 10:03:40.967756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.539 [2024-11-27 10:03:40.967773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.539 [2024-11-27 10:03:40.967780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.539 [2024-11-27 10:03:40.967787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.539 [2024-11-27 10:03:40.967803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.539 qpair failed and we were unable to recover it.
00:31:25.539 [2024-11-27 10:03:40.977686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.539 [2024-11-27 10:03:40.977761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.539 [2024-11-27 10:03:40.977778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.539 [2024-11-27 10:03:40.977785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.539 [2024-11-27 10:03:40.977791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.539 [2024-11-27 10:03:40.977808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.539 qpair failed and we were unable to recover it.
00:31:25.539 [2024-11-27 10:03:40.987742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.539 [2024-11-27 10:03:40.987814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.539 [2024-11-27 10:03:40.987830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.539 [2024-11-27 10:03:40.987837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.539 [2024-11-27 10:03:40.987843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.539 [2024-11-27 10:03:40.987860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.539 qpair failed and we were unable to recover it.
00:31:25.539 [2024-11-27 10:03:40.997761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.539 [2024-11-27 10:03:40.997825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.539 [2024-11-27 10:03:40.997842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.539 [2024-11-27 10:03:40.997849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.539 [2024-11-27 10:03:40.997855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.539 [2024-11-27 10:03:40.997872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.539 qpair failed and we were unable to recover it.
00:31:25.801 [2024-11-27 10:03:41.007817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.801 [2024-11-27 10:03:41.007882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.801 [2024-11-27 10:03:41.007908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.801 [2024-11-27 10:03:41.007916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.801 [2024-11-27 10:03:41.007922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.801 [2024-11-27 10:03:41.007941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.801 qpair failed and we were unable to recover it.
00:31:25.801 [2024-11-27 10:03:41.017810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.801 [2024-11-27 10:03:41.017879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.801 [2024-11-27 10:03:41.017895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.801 [2024-11-27 10:03:41.017903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.801 [2024-11-27 10:03:41.017909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.801 [2024-11-27 10:03:41.017926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.801 qpair failed and we were unable to recover it.
00:31:25.801 [2024-11-27 10:03:41.027855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.801 [2024-11-27 10:03:41.027913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.801 [2024-11-27 10:03:41.027929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.801 [2024-11-27 10:03:41.027936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.801 [2024-11-27 10:03:41.027943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.801 [2024-11-27 10:03:41.027960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.801 qpair failed and we were unable to recover it.
00:31:25.801 [2024-11-27 10:03:41.037876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.801 [2024-11-27 10:03:41.037941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.801 [2024-11-27 10:03:41.037957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.801 [2024-11-27 10:03:41.037964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.801 [2024-11-27 10:03:41.037971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.801 [2024-11-27 10:03:41.037988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.801 qpair failed and we were unable to recover it.
00:31:25.801 [2024-11-27 10:03:41.047922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.801 [2024-11-27 10:03:41.047994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.801 [2024-11-27 10:03:41.048034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.801 [2024-11-27 10:03:41.048044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.801 [2024-11-27 10:03:41.048052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.801 [2024-11-27 10:03:41.048076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.801 qpair failed and we were unable to recover it.
00:31:25.801 [2024-11-27 10:03:41.057924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.801 [2024-11-27 10:03:41.058001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.801 [2024-11-27 10:03:41.058021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.801 [2024-11-27 10:03:41.058029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.801 [2024-11-27 10:03:41.058036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.801 [2024-11-27 10:03:41.058055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.801 qpair failed and we were unable to recover it.
00:31:25.801 [2024-11-27 10:03:41.067844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.801 [2024-11-27 10:03:41.067903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.801 [2024-11-27 10:03:41.067923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.801 [2024-11-27 10:03:41.067931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.801 [2024-11-27 10:03:41.067938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.801 [2024-11-27 10:03:41.067956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.801 qpair failed and we were unable to recover it.
00:31:25.801 [2024-11-27 10:03:41.077996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.801 [2024-11-27 10:03:41.078060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.801 [2024-11-27 10:03:41.078076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.801 [2024-11-27 10:03:41.078083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.801 [2024-11-27 10:03:41.078090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.801 [2024-11-27 10:03:41.078107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.801 qpair failed and we were unable to recover it.
00:31:25.801 [2024-11-27 10:03:41.088041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.801 [2024-11-27 10:03:41.088112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.801 [2024-11-27 10:03:41.088129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.801 [2024-11-27 10:03:41.088136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.801 [2024-11-27 10:03:41.088148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.801 [2024-11-27 10:03:41.088169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.801 qpair failed and we were unable to recover it.
00:31:25.801 [2024-11-27 10:03:41.098110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.801 [2024-11-27 10:03:41.098182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.801 [2024-11-27 10:03:41.098201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.801 [2024-11-27 10:03:41.098209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.801 [2024-11-27 10:03:41.098215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.801 [2024-11-27 10:03:41.098233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.801 qpair failed and we were unable to recover it.
00:31:25.801 [2024-11-27 10:03:41.108104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.801 [2024-11-27 10:03:41.108170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.801 [2024-11-27 10:03:41.108188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.801 [2024-11-27 10:03:41.108195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.801 [2024-11-27 10:03:41.108202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.802 [2024-11-27 10:03:41.108219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.802 qpair failed and we were unable to recover it.
00:31:25.802 [2024-11-27 10:03:41.118114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.802 [2024-11-27 10:03:41.118188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.802 [2024-11-27 10:03:41.118206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.802 [2024-11-27 10:03:41.118214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.802 [2024-11-27 10:03:41.118223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.802 [2024-11-27 10:03:41.118240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.802 qpair failed and we were unable to recover it.
00:31:25.802 [2024-11-27 10:03:41.128169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.802 [2024-11-27 10:03:41.128235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.802 [2024-11-27 10:03:41.128250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.802 [2024-11-27 10:03:41.128258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.802 [2024-11-27 10:03:41.128264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.802 [2024-11-27 10:03:41.128281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.802 qpair failed and we were unable to recover it.
00:31:25.802 [2024-11-27 10:03:41.138220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.802 [2024-11-27 10:03:41.138295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.802 [2024-11-27 10:03:41.138312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.802 [2024-11-27 10:03:41.138320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.802 [2024-11-27 10:03:41.138326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.802 [2024-11-27 10:03:41.138343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.802 qpair failed and we were unable to recover it.
00:31:25.802 [2024-11-27 10:03:41.148216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.802 [2024-11-27 10:03:41.148283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.802 [2024-11-27 10:03:41.148300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.802 [2024-11-27 10:03:41.148308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.802 [2024-11-27 10:03:41.148314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.802 [2024-11-27 10:03:41.148331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.802 qpair failed and we were unable to recover it.
00:31:25.802 [2024-11-27 10:03:41.158104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.802 [2024-11-27 10:03:41.158168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.802 [2024-11-27 10:03:41.158185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.802 [2024-11-27 10:03:41.158193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.802 [2024-11-27 10:03:41.158199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.802 [2024-11-27 10:03:41.158216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.802 qpair failed and we were unable to recover it.
00:31:25.802 [2024-11-27 10:03:41.168262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.802 [2024-11-27 10:03:41.168329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.802 [2024-11-27 10:03:41.168344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.802 [2024-11-27 10:03:41.168352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.802 [2024-11-27 10:03:41.168358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.802 [2024-11-27 10:03:41.168375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.802 qpair failed and we were unable to recover it.
00:31:25.802 [2024-11-27 10:03:41.178332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.802 [2024-11-27 10:03:41.178401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.802 [2024-11-27 10:03:41.178422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.802 [2024-11-27 10:03:41.178430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.802 [2024-11-27 10:03:41.178436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.802 [2024-11-27 10:03:41.178453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.802 qpair failed and we were unable to recover it.
00:31:25.802 [2024-11-27 10:03:41.188339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.802 [2024-11-27 10:03:41.188406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.802 [2024-11-27 10:03:41.188422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.802 [2024-11-27 10:03:41.188429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.802 [2024-11-27 10:03:41.188435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.802 [2024-11-27 10:03:41.188452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.802 qpair failed and we were unable to recover it.
00:31:25.802 [2024-11-27 10:03:41.198348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.802 [2024-11-27 10:03:41.198447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.802 [2024-11-27 10:03:41.198464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.802 [2024-11-27 10:03:41.198471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.802 [2024-11-27 10:03:41.198478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.802 [2024-11-27 10:03:41.198494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.802 qpair failed and we were unable to recover it.
00:31:25.802 [2024-11-27 10:03:41.208297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.802 [2024-11-27 10:03:41.208365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.802 [2024-11-27 10:03:41.208382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.802 [2024-11-27 10:03:41.208389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.802 [2024-11-27 10:03:41.208396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.802 [2024-11-27 10:03:41.208412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.802 qpair failed and we were unable to recover it.
00:31:25.802 [2024-11-27 10:03:41.218525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.802 [2024-11-27 10:03:41.218642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.802 [2024-11-27 10:03:41.218658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.802 [2024-11-27 10:03:41.218666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.802 [2024-11-27 10:03:41.218677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.802 [2024-11-27 10:03:41.218694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.802 qpair failed and we were unable to recover it.
00:31:25.802 [2024-11-27 10:03:41.228431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.802 [2024-11-27 10:03:41.228492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.802 [2024-11-27 10:03:41.228509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.802 [2024-11-27 10:03:41.228516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.802 [2024-11-27 10:03:41.228523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.802 [2024-11-27 10:03:41.228539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.802 qpair failed and we were unable to recover it.
00:31:25.802 [2024-11-27 10:03:41.238430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.802 [2024-11-27 10:03:41.238497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.802 [2024-11-27 10:03:41.238513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.802 [2024-11-27 10:03:41.238520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.802 [2024-11-27 10:03:41.238527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.802 [2024-11-27 10:03:41.238544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.803 qpair failed and we were unable to recover it.
00:31:25.803 [2024-11-27 10:03:41.248514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.803 [2024-11-27 10:03:41.248579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.803 [2024-11-27 10:03:41.248595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.803 [2024-11-27 10:03:41.248602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.803 [2024-11-27 10:03:41.248609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.803 [2024-11-27 10:03:41.248625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.803 qpair failed and we were unable to recover it.
00:31:25.803 [2024-11-27 10:03:41.258574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.803 [2024-11-27 10:03:41.258644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.803 [2024-11-27 10:03:41.258659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.803 [2024-11-27 10:03:41.258667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.803 [2024-11-27 10:03:41.258673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:25.803 [2024-11-27 10:03:41.258690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.803 qpair failed and we were unable to recover it.
00:31:26.065 [2024-11-27 10:03:41.268566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.065 [2024-11-27 10:03:41.268622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.065 [2024-11-27 10:03:41.268639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.065 [2024-11-27 10:03:41.268646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.065 [2024-11-27 10:03:41.268652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:26.065 [2024-11-27 10:03:41.268669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.065 qpair failed and we were unable to recover it.
00:31:26.065 [2024-11-27 10:03:41.278539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.065 [2024-11-27 10:03:41.278594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.065 [2024-11-27 10:03:41.278610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.065 [2024-11-27 10:03:41.278617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.065 [2024-11-27 10:03:41.278623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:26.065 [2024-11-27 10:03:41.278640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.065 qpair failed and we were unable to recover it.
00:31:26.065 [2024-11-27 10:03:41.288610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.065 [2024-11-27 10:03:41.288672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.065 [2024-11-27 10:03:41.288687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.065 [2024-11-27 10:03:41.288695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.065 [2024-11-27 10:03:41.288702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:26.065 [2024-11-27 10:03:41.288718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.065 qpair failed and we were unable to recover it.
00:31:26.065 [2024-11-27 10:03:41.298672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.065 [2024-11-27 10:03:41.298747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.065 [2024-11-27 10:03:41.298763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.065 [2024-11-27 10:03:41.298771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.065 [2024-11-27 10:03:41.298777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:26.065 [2024-11-27 10:03:41.298793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.065 qpair failed and we were unable to recover it.
00:31:26.065 [2024-11-27 10:03:41.308674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.065 [2024-11-27 10:03:41.308748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.065 [2024-11-27 10:03:41.308769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.065 [2024-11-27 10:03:41.308776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.065 [2024-11-27 10:03:41.308783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.065 [2024-11-27 10:03:41.308799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.065 qpair failed and we were unable to recover it. 00:31:26.065 [2024-11-27 10:03:41.318570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.065 [2024-11-27 10:03:41.318633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.065 [2024-11-27 10:03:41.318655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.065 [2024-11-27 10:03:41.318663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.065 [2024-11-27 10:03:41.318670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.065 [2024-11-27 10:03:41.318688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.065 qpair failed and we were unable to recover it. 00:31:26.065 [2024-11-27 10:03:41.328728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.065 [2024-11-27 10:03:41.328792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.065 [2024-11-27 10:03:41.328810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.065 [2024-11-27 10:03:41.328818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.065 [2024-11-27 10:03:41.328824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.065 [2024-11-27 10:03:41.328842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.065 qpair failed and we were unable to recover it. 
00:31:26.065 [2024-11-27 10:03:41.338815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.066 [2024-11-27 10:03:41.338895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.066 [2024-11-27 10:03:41.338911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.066 [2024-11-27 10:03:41.338918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.066 [2024-11-27 10:03:41.338925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.066 [2024-11-27 10:03:41.338942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.066 qpair failed and we were unable to recover it. 00:31:26.066 [2024-11-27 10:03:41.348762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.066 [2024-11-27 10:03:41.348862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.066 [2024-11-27 10:03:41.348897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.066 [2024-11-27 10:03:41.348913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.066 [2024-11-27 10:03:41.348921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.066 [2024-11-27 10:03:41.348944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.066 qpair failed and we were unable to recover it. 00:31:26.066 [2024-11-27 10:03:41.358814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.066 [2024-11-27 10:03:41.358885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.066 [2024-11-27 10:03:41.358920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.066 [2024-11-27 10:03:41.358929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.066 [2024-11-27 10:03:41.358936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.066 [2024-11-27 10:03:41.358960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.066 qpair failed and we were unable to recover it. 
00:31:26.066 [2024-11-27 10:03:41.368829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.066 [2024-11-27 10:03:41.368896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.066 [2024-11-27 10:03:41.368915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.066 [2024-11-27 10:03:41.368922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.066 [2024-11-27 10:03:41.368929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.066 [2024-11-27 10:03:41.368947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.066 qpair failed and we were unable to recover it. 00:31:26.066 [2024-11-27 10:03:41.378777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.066 [2024-11-27 10:03:41.378843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.066 [2024-11-27 10:03:41.378860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.066 [2024-11-27 10:03:41.378868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.066 [2024-11-27 10:03:41.378874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.066 [2024-11-27 10:03:41.378892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.066 qpair failed and we were unable to recover it. 00:31:26.066 [2024-11-27 10:03:41.388884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.066 [2024-11-27 10:03:41.389003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.066 [2024-11-27 10:03:41.389037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.066 [2024-11-27 10:03:41.389047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.066 [2024-11-27 10:03:41.389054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.066 [2024-11-27 10:03:41.389078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.066 qpair failed and we were unable to recover it. 
00:31:26.066 [2024-11-27 10:03:41.398921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.066 [2024-11-27 10:03:41.398975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.066 [2024-11-27 10:03:41.398997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.066 [2024-11-27 10:03:41.399004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.066 [2024-11-27 10:03:41.399011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.066 [2024-11-27 10:03:41.399030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.066 qpair failed and we were unable to recover it. 00:31:26.066 [2024-11-27 10:03:41.408913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.066 [2024-11-27 10:03:41.408976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.066 [2024-11-27 10:03:41.408993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.066 [2024-11-27 10:03:41.409000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.066 [2024-11-27 10:03:41.409007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.066 [2024-11-27 10:03:41.409024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.066 qpair failed and we were unable to recover it. 00:31:26.066 [2024-11-27 10:03:41.418884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.066 [2024-11-27 10:03:41.418952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.066 [2024-11-27 10:03:41.418969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.066 [2024-11-27 10:03:41.418976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.066 [2024-11-27 10:03:41.418983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.066 [2024-11-27 10:03:41.419000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.066 qpair failed and we were unable to recover it. 
00:31:26.066 [2024-11-27 10:03:41.428982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.066 [2024-11-27 10:03:41.429038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.066 [2024-11-27 10:03:41.429055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.066 [2024-11-27 10:03:41.429062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.066 [2024-11-27 10:03:41.429068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.066 [2024-11-27 10:03:41.429085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.066 qpair failed and we were unable to recover it. 00:31:26.066 [2024-11-27 10:03:41.438899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.066 [2024-11-27 10:03:41.439010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.066 [2024-11-27 10:03:41.439030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.066 [2024-11-27 10:03:41.439038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.066 [2024-11-27 10:03:41.439045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.066 [2024-11-27 10:03:41.439064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.066 qpair failed and we were unable to recover it. 00:31:26.066 [2024-11-27 10:03:41.448969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.066 [2024-11-27 10:03:41.449039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.066 [2024-11-27 10:03:41.449057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.066 [2024-11-27 10:03:41.449064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.066 [2024-11-27 10:03:41.449071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.066 [2024-11-27 10:03:41.449088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.066 qpair failed and we were unable to recover it. 
00:31:26.066 [2024-11-27 10:03:41.459110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.066 [2024-11-27 10:03:41.459189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.066 [2024-11-27 10:03:41.459207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.066 [2024-11-27 10:03:41.459215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.066 [2024-11-27 10:03:41.459221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.066 [2024-11-27 10:03:41.459239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.066 qpair failed and we were unable to recover it. 00:31:26.066 [2024-11-27 10:03:41.469120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.067 [2024-11-27 10:03:41.469226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.067 [2024-11-27 10:03:41.469242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.067 [2024-11-27 10:03:41.469250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.067 [2024-11-27 10:03:41.469256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.067 [2024-11-27 10:03:41.469274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.067 qpair failed and we were unable to recover it. 00:31:26.067 [2024-11-27 10:03:41.479165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.067 [2024-11-27 10:03:41.479232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.067 [2024-11-27 10:03:41.479248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.067 [2024-11-27 10:03:41.479262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.067 [2024-11-27 10:03:41.479269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.067 [2024-11-27 10:03:41.479286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.067 qpair failed and we were unable to recover it. 
00:31:26.067 [2024-11-27 10:03:41.489087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.067 [2024-11-27 10:03:41.489153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.067 [2024-11-27 10:03:41.489176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.067 [2024-11-27 10:03:41.489184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.067 [2024-11-27 10:03:41.489190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.067 [2024-11-27 10:03:41.489207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.067 qpair failed and we were unable to recover it. 00:31:26.067 [2024-11-27 10:03:41.499241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.067 [2024-11-27 10:03:41.499312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.067 [2024-11-27 10:03:41.499329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.067 [2024-11-27 10:03:41.499336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.067 [2024-11-27 10:03:41.499343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.067 [2024-11-27 10:03:41.499359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.067 qpair failed and we were unable to recover it. 00:31:26.067 [2024-11-27 10:03:41.509253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.067 [2024-11-27 10:03:41.509315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.067 [2024-11-27 10:03:41.509330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.067 [2024-11-27 10:03:41.509337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.067 [2024-11-27 10:03:41.509344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.067 [2024-11-27 10:03:41.509360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.067 qpair failed and we were unable to recover it. 
00:31:26.067 [2024-11-27 10:03:41.519253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.067 [2024-11-27 10:03:41.519306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.067 [2024-11-27 10:03:41.519321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.067 [2024-11-27 10:03:41.519328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.067 [2024-11-27 10:03:41.519335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.067 [2024-11-27 10:03:41.519357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.067 qpair failed and we were unable to recover it. 00:31:26.067 [2024-11-27 10:03:41.529320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.067 [2024-11-27 10:03:41.529387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.067 [2024-11-27 10:03:41.529402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.067 [2024-11-27 10:03:41.529409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.067 [2024-11-27 10:03:41.529416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.067 [2024-11-27 10:03:41.529432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.067 qpair failed and we were unable to recover it. 00:31:26.329 [2024-11-27 10:03:41.539352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.329 [2024-11-27 10:03:41.539428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.329 [2024-11-27 10:03:41.539444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.329 [2024-11-27 10:03:41.539451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.329 [2024-11-27 10:03:41.539458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.329 [2024-11-27 10:03:41.539475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.329 qpair failed and we were unable to recover it. 
00:31:26.329 [2024-11-27 10:03:41.549401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.329 [2024-11-27 10:03:41.549476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.329 [2024-11-27 10:03:41.549493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.329 [2024-11-27 10:03:41.549500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.329 [2024-11-27 10:03:41.549507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.329 [2024-11-27 10:03:41.549524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.329 qpair failed and we were unable to recover it. 00:31:26.329 [2024-11-27 10:03:41.559404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.329 [2024-11-27 10:03:41.559465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.329 [2024-11-27 10:03:41.559481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.329 [2024-11-27 10:03:41.559489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.329 [2024-11-27 10:03:41.559496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.329 [2024-11-27 10:03:41.559513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.329 qpair failed and we were unable to recover it. 00:31:26.329 [2024-11-27 10:03:41.569486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.329 [2024-11-27 10:03:41.569551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.329 [2024-11-27 10:03:41.569568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.329 [2024-11-27 10:03:41.569575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.329 [2024-11-27 10:03:41.569582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.329 [2024-11-27 10:03:41.569600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.329 qpair failed and we were unable to recover it. 
00:31:26.329 [2024-11-27 10:03:41.579544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.329 [2024-11-27 10:03:41.579614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.329 [2024-11-27 10:03:41.579631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.329 [2024-11-27 10:03:41.579642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.329 [2024-11-27 10:03:41.579650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.329 [2024-11-27 10:03:41.579667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.329 qpair failed and we were unable to recover it. 00:31:26.329 [2024-11-27 10:03:41.589508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.329 [2024-11-27 10:03:41.589569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.329 [2024-11-27 10:03:41.589586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.329 [2024-11-27 10:03:41.589594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.329 [2024-11-27 10:03:41.589601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.329 [2024-11-27 10:03:41.589618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.329 qpair failed and we were unable to recover it. 00:31:26.329 [2024-11-27 10:03:41.599416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.329 [2024-11-27 10:03:41.599484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.329 [2024-11-27 10:03:41.599504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.329 [2024-11-27 10:03:41.599513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.329 [2024-11-27 10:03:41.599520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.329 [2024-11-27 10:03:41.599538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.329 qpair failed and we were unable to recover it. 
00:31:26.329 [2024-11-27 10:03:41.609640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.329 [2024-11-27 10:03:41.609707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.329 [2024-11-27 10:03:41.609730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.329 [2024-11-27 10:03:41.609738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.329 [2024-11-27 10:03:41.609745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.329 [2024-11-27 10:03:41.609763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.329 qpair failed and we were unable to recover it. 00:31:26.329 [2024-11-27 10:03:41.619627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.329 [2024-11-27 10:03:41.619731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.329 [2024-11-27 10:03:41.619749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.329 [2024-11-27 10:03:41.619758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.329 [2024-11-27 10:03:41.619766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.329 [2024-11-27 10:03:41.619783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.329 qpair failed and we were unable to recover it. 00:31:26.329 [2024-11-27 10:03:41.629637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.330 [2024-11-27 10:03:41.629701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.330 [2024-11-27 10:03:41.629717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.330 [2024-11-27 10:03:41.629724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.330 [2024-11-27 10:03:41.629731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.330 [2024-11-27 10:03:41.629747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.330 qpair failed and we were unable to recover it. 
00:31:26.330 [2024-11-27 10:03:41.639556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.330 [2024-11-27 10:03:41.639652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.330 [2024-11-27 10:03:41.639670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.330 [2024-11-27 10:03:41.639679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.330 [2024-11-27 10:03:41.639685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.330 [2024-11-27 10:03:41.639701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.330 qpair failed and we were unable to recover it. 00:31:26.330 [2024-11-27 10:03:41.649733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.330 [2024-11-27 10:03:41.649806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.330 [2024-11-27 10:03:41.649822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.330 [2024-11-27 10:03:41.649830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.330 [2024-11-27 10:03:41.649841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.330 [2024-11-27 10:03:41.649858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.330 qpair failed and we were unable to recover it. 00:31:26.330 [2024-11-27 10:03:41.659792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.330 [2024-11-27 10:03:41.659872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.330 [2024-11-27 10:03:41.659888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.330 [2024-11-27 10:03:41.659896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.330 [2024-11-27 10:03:41.659903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.330 [2024-11-27 10:03:41.659919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.330 qpair failed and we were unable to recover it. 
00:31:26.330 [2024-11-27 10:03:41.669777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.330 [2024-11-27 10:03:41.669853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.330 [2024-11-27 10:03:41.669888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.330 [2024-11-27 10:03:41.669898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.330 [2024-11-27 10:03:41.669906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.330 [2024-11-27 10:03:41.669930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.330 qpair failed and we were unable to recover it. 00:31:26.330 [2024-11-27 10:03:41.679748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.330 [2024-11-27 10:03:41.679822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.330 [2024-11-27 10:03:41.679858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.330 [2024-11-27 10:03:41.679867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.330 [2024-11-27 10:03:41.679874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.330 [2024-11-27 10:03:41.679898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.330 qpair failed and we were unable to recover it. 00:31:26.330 [2024-11-27 10:03:41.689846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.330 [2024-11-27 10:03:41.689912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.330 [2024-11-27 10:03:41.689930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.330 [2024-11-27 10:03:41.689938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.330 [2024-11-27 10:03:41.689945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.330 [2024-11-27 10:03:41.689964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.330 qpair failed and we were unable to recover it. 
00:31:26.330 [2024-11-27 10:03:41.699925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.330 [2024-11-27 10:03:41.700024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.330 [2024-11-27 10:03:41.700060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.330 [2024-11-27 10:03:41.700070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.330 [2024-11-27 10:03:41.700077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.330 [2024-11-27 10:03:41.700102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.330 qpair failed and we were unable to recover it. 00:31:26.330 [2024-11-27 10:03:41.709888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.330 [2024-11-27 10:03:41.709950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.330 [2024-11-27 10:03:41.709970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.330 [2024-11-27 10:03:41.709977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.330 [2024-11-27 10:03:41.709984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.330 [2024-11-27 10:03:41.710003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.330 qpair failed and we were unable to recover it. 00:31:26.330 [2024-11-27 10:03:41.719914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.330 [2024-11-27 10:03:41.719979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.330 [2024-11-27 10:03:41.720014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.330 [2024-11-27 10:03:41.720023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.330 [2024-11-27 10:03:41.720030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.330 [2024-11-27 10:03:41.720054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.330 qpair failed and we were unable to recover it. 
00:31:26.330 [2024-11-27 10:03:41.730009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.330 [2024-11-27 10:03:41.730078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.330 [2024-11-27 10:03:41.730098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.330 [2024-11-27 10:03:41.730106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.330 [2024-11-27 10:03:41.730113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.330 [2024-11-27 10:03:41.730132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.330 qpair failed and we were unable to recover it. 00:31:26.330 [2024-11-27 10:03:41.739968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.330 [2024-11-27 10:03:41.740045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.330 [2024-11-27 10:03:41.740069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.330 [2024-11-27 10:03:41.740077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.330 [2024-11-27 10:03:41.740084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.330 [2024-11-27 10:03:41.740101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.330 qpair failed and we were unable to recover it. 00:31:26.330 [2024-11-27 10:03:41.750016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.330 [2024-11-27 10:03:41.750081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.330 [2024-11-27 10:03:41.750098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.330 [2024-11-27 10:03:41.750105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.330 [2024-11-27 10:03:41.750112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.330 [2024-11-27 10:03:41.750129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.330 qpair failed and we were unable to recover it. 
00:31:26.330 [2024-11-27 10:03:41.760043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.331 [2024-11-27 10:03:41.760101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.331 [2024-11-27 10:03:41.760117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.331 [2024-11-27 10:03:41.760125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.331 [2024-11-27 10:03:41.760131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.331 [2024-11-27 10:03:41.760148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.331 qpair failed and we were unable to recover it. 00:31:26.331 [2024-11-27 10:03:41.770056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.331 [2024-11-27 10:03:41.770120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.331 [2024-11-27 10:03:41.770137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.331 [2024-11-27 10:03:41.770145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.331 [2024-11-27 10:03:41.770152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.331 [2024-11-27 10:03:41.770174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.331 qpair failed and we were unable to recover it. 00:31:26.331 [2024-11-27 10:03:41.780109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.331 [2024-11-27 10:03:41.780182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.331 [2024-11-27 10:03:41.780200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.331 [2024-11-27 10:03:41.780207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.331 [2024-11-27 10:03:41.780219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.331 [2024-11-27 10:03:41.780236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.331 qpair failed and we were unable to recover it. 
00:31:26.331 [2024-11-27 10:03:41.789984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.331 [2024-11-27 10:03:41.790046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.331 [2024-11-27 10:03:41.790063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.331 [2024-11-27 10:03:41.790072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.331 [2024-11-27 10:03:41.790078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.331 [2024-11-27 10:03:41.790095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.331 qpair failed and we were unable to recover it. 00:31:26.593 [2024-11-27 10:03:41.800123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.593 [2024-11-27 10:03:41.800193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.593 [2024-11-27 10:03:41.800210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.593 [2024-11-27 10:03:41.800218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.593 [2024-11-27 10:03:41.800225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.593 [2024-11-27 10:03:41.800242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.593 qpair failed and we were unable to recover it. 00:31:26.593 [2024-11-27 10:03:41.810145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.593 [2024-11-27 10:03:41.810224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.593 [2024-11-27 10:03:41.810241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.593 [2024-11-27 10:03:41.810249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.593 [2024-11-27 10:03:41.810255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.593 [2024-11-27 10:03:41.810272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.593 qpair failed and we were unable to recover it. 
00:31:26.593 [2024-11-27 10:03:41.820204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.593 [2024-11-27 10:03:41.820269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.593 [2024-11-27 10:03:41.820286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.593 [2024-11-27 10:03:41.820293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.593 [2024-11-27 10:03:41.820300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.593 [2024-11-27 10:03:41.820317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.593 qpair failed and we were unable to recover it. 00:31:26.593 [2024-11-27 10:03:41.830196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.593 [2024-11-27 10:03:41.830260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.593 [2024-11-27 10:03:41.830276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.593 [2024-11-27 10:03:41.830284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.593 [2024-11-27 10:03:41.830290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.593 [2024-11-27 10:03:41.830307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.593 qpair failed and we were unable to recover it. 00:31:26.593 [2024-11-27 10:03:41.840109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.593 [2024-11-27 10:03:41.840185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.593 [2024-11-27 10:03:41.840201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.593 [2024-11-27 10:03:41.840208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.593 [2024-11-27 10:03:41.840215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.593 [2024-11-27 10:03:41.840231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.593 qpair failed and we were unable to recover it. 
00:31:26.593 [2024-11-27 10:03:41.850275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.593 [2024-11-27 10:03:41.850370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.593 [2024-11-27 10:03:41.850386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.593 [2024-11-27 10:03:41.850393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.593 [2024-11-27 10:03:41.850400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.594 [2024-11-27 10:03:41.850416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.594 qpair failed and we were unable to recover it. 00:31:26.594 [2024-11-27 10:03:41.860310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.594 [2024-11-27 10:03:41.860389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.594 [2024-11-27 10:03:41.860405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.594 [2024-11-27 10:03:41.860413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.594 [2024-11-27 10:03:41.860419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.594 [2024-11-27 10:03:41.860435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.594 qpair failed and we were unable to recover it. 00:31:26.594 [2024-11-27 10:03:41.870334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.594 [2024-11-27 10:03:41.870399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.594 [2024-11-27 10:03:41.870421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.594 [2024-11-27 10:03:41.870428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.594 [2024-11-27 10:03:41.870434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.594 [2024-11-27 10:03:41.870451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.594 qpair failed and we were unable to recover it. 
00:31:26.594 [2024-11-27 10:03:41.880354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.594 [2024-11-27 10:03:41.880422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.594 [2024-11-27 10:03:41.880438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.594 [2024-11-27 10:03:41.880446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.594 [2024-11-27 10:03:41.880452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.594 [2024-11-27 10:03:41.880468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.594 qpair failed and we were unable to recover it. 00:31:26.594 [2024-11-27 10:03:41.890411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.594 [2024-11-27 10:03:41.890521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.594 [2024-11-27 10:03:41.890538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.594 [2024-11-27 10:03:41.890546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.594 [2024-11-27 10:03:41.890552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.594 [2024-11-27 10:03:41.890569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.594 qpair failed and we were unable to recover it. 00:31:26.594 [2024-11-27 10:03:41.900478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.594 [2024-11-27 10:03:41.900555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.594 [2024-11-27 10:03:41.900571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.594 [2024-11-27 10:03:41.900579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.594 [2024-11-27 10:03:41.900585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.594 [2024-11-27 10:03:41.900601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.594 qpair failed and we were unable to recover it. 
00:31:26.594 [2024-11-27 10:03:41.910484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.594 [2024-11-27 10:03:41.910557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.594 [2024-11-27 10:03:41.910573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.594 [2024-11-27 10:03:41.910586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.594 [2024-11-27 10:03:41.910593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.594 [2024-11-27 10:03:41.910609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.594 qpair failed and we were unable to recover it. 00:31:26.594 [2024-11-27 10:03:41.920503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.594 [2024-11-27 10:03:41.920570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.594 [2024-11-27 10:03:41.920586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.594 [2024-11-27 10:03:41.920594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.594 [2024-11-27 10:03:41.920600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.594 [2024-11-27 10:03:41.920616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.594 qpair failed and we were unable to recover it. 00:31:26.594 [2024-11-27 10:03:41.930544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.594 [2024-11-27 10:03:41.930610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.594 [2024-11-27 10:03:41.930626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.594 [2024-11-27 10:03:41.930634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.594 [2024-11-27 10:03:41.930641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.594 [2024-11-27 10:03:41.930657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.594 qpair failed and we were unable to recover it. 
00:31:26.594 [2024-11-27 10:03:41.940580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.594 [2024-11-27 10:03:41.940654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.594 [2024-11-27 10:03:41.940671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.594 [2024-11-27 10:03:41.940679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.594 [2024-11-27 10:03:41.940685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.594 [2024-11-27 10:03:41.940702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.594 qpair failed and we were unable to recover it. 00:31:26.594 [2024-11-27 10:03:41.950594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.594 [2024-11-27 10:03:41.950652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.594 [2024-11-27 10:03:41.950668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.594 [2024-11-27 10:03:41.950675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.594 [2024-11-27 10:03:41.950681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.594 [2024-11-27 10:03:41.950708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.594 qpair failed and we were unable to recover it. 00:31:26.594 [2024-11-27 10:03:41.960606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.594 [2024-11-27 10:03:41.960664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.594 [2024-11-27 10:03:41.960680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.594 [2024-11-27 10:03:41.960688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.594 [2024-11-27 10:03:41.960694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.594 [2024-11-27 10:03:41.960711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.594 qpair failed and we were unable to recover it. 
00:31:26.594 [2024-11-27 10:03:41.970532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.594 [2024-11-27 10:03:41.970598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.594 [2024-11-27 10:03:41.970619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.594 [2024-11-27 10:03:41.970626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.594 [2024-11-27 10:03:41.970633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.594 [2024-11-27 10:03:41.970651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.594 qpair failed and we were unable to recover it. 00:31:26.594 [2024-11-27 10:03:41.980710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.594 [2024-11-27 10:03:41.980778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.594 [2024-11-27 10:03:41.980798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.594 [2024-11-27 10:03:41.980805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.595 [2024-11-27 10:03:41.980812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.595 [2024-11-27 10:03:41.980830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.595 qpair failed and we were unable to recover it. 00:31:26.595 [2024-11-27 10:03:41.990712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.595 [2024-11-27 10:03:41.990806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.595 [2024-11-27 10:03:41.990822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.595 [2024-11-27 10:03:41.990830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.595 [2024-11-27 10:03:41.990836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.595 [2024-11-27 10:03:41.990852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.595 qpair failed and we were unable to recover it. 
00:31:26.595 [2024-11-27 10:03:42.000706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.595 [2024-11-27 10:03:42.000770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.595 [2024-11-27 10:03:42.000796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.595 [2024-11-27 10:03:42.000804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.595 [2024-11-27 10:03:42.000811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.595 [2024-11-27 10:03:42.000830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.595 qpair failed and we were unable to recover it. 00:31:26.595 [2024-11-27 10:03:42.010781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.595 [2024-11-27 10:03:42.010894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.595 [2024-11-27 10:03:42.010911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.595 [2024-11-27 10:03:42.010919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.595 [2024-11-27 10:03:42.010925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.595 [2024-11-27 10:03:42.010943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.595 qpair failed and we were unable to recover it. 00:31:26.595 [2024-11-27 10:03:42.020830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.595 [2024-11-27 10:03:42.020898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.595 [2024-11-27 10:03:42.020915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.595 [2024-11-27 10:03:42.020923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.595 [2024-11-27 10:03:42.020929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.595 [2024-11-27 10:03:42.020946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.595 qpair failed and we were unable to recover it. 
00:31:26.595 [2024-11-27 10:03:42.030795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.595 [2024-11-27 10:03:42.030859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.595 [2024-11-27 10:03:42.030876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.595 [2024-11-27 10:03:42.030883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.595 [2024-11-27 10:03:42.030889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.595 [2024-11-27 10:03:42.030906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.595 qpair failed and we were unable to recover it. 00:31:26.595 [2024-11-27 10:03:42.040837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.595 [2024-11-27 10:03:42.040897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.595 [2024-11-27 10:03:42.040914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.595 [2024-11-27 10:03:42.040927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.595 [2024-11-27 10:03:42.040933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.595 [2024-11-27 10:03:42.040950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.595 qpair failed and we were unable to recover it. 00:31:26.595 [2024-11-27 10:03:42.050912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.595 [2024-11-27 10:03:42.051027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.595 [2024-11-27 10:03:42.051045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.595 [2024-11-27 10:03:42.051052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.595 [2024-11-27 10:03:42.051059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.595 [2024-11-27 10:03:42.051076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.595 qpair failed and we were unable to recover it. 
00:31:26.857 [2024-11-27 10:03:42.060949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.857 [2024-11-27 10:03:42.061028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.857 [2024-11-27 10:03:42.061044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.857 [2024-11-27 10:03:42.061052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.857 [2024-11-27 10:03:42.061058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.857 [2024-11-27 10:03:42.061075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.857 qpair failed and we were unable to recover it. 00:31:26.857 [2024-11-27 10:03:42.070955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.857 [2024-11-27 10:03:42.071038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.857 [2024-11-27 10:03:42.071055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.857 [2024-11-27 10:03:42.071063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.857 [2024-11-27 10:03:42.071070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.857 [2024-11-27 10:03:42.071086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.857 qpair failed and we were unable to recover it. 00:31:26.857 [2024-11-27 10:03:42.080955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.857 [2024-11-27 10:03:42.081022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.857 [2024-11-27 10:03:42.081039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.857 [2024-11-27 10:03:42.081047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.857 [2024-11-27 10:03:42.081054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.857 [2024-11-27 10:03:42.081077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.857 qpair failed and we were unable to recover it. 
00:31:26.857 [2024-11-27 10:03:42.091005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.857 [2024-11-27 10:03:42.091070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.857 [2024-11-27 10:03:42.091087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.857 [2024-11-27 10:03:42.091094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.857 [2024-11-27 10:03:42.091101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.857 [2024-11-27 10:03:42.091119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.858 qpair failed and we were unable to recover it. 00:31:26.858 [2024-11-27 10:03:42.100965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.858 [2024-11-27 10:03:42.101039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.858 [2024-11-27 10:03:42.101056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.858 [2024-11-27 10:03:42.101064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.858 [2024-11-27 10:03:42.101070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.858 [2024-11-27 10:03:42.101087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.858 qpair failed and we were unable to recover it. 00:31:26.858 [2024-11-27 10:03:42.111050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.858 [2024-11-27 10:03:42.111121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.858 [2024-11-27 10:03:42.111138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.858 [2024-11-27 10:03:42.111145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.858 [2024-11-27 10:03:42.111152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.858 [2024-11-27 10:03:42.111174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.858 qpair failed and we were unable to recover it. 
00:31:26.858 [2024-11-27 10:03:42.121069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.858 [2024-11-27 10:03:42.121134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.858 [2024-11-27 10:03:42.121150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.858 [2024-11-27 10:03:42.121163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.858 [2024-11-27 10:03:42.121170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.858 [2024-11-27 10:03:42.121187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.858 qpair failed and we were unable to recover it. 00:31:26.858 [2024-11-27 10:03:42.131121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.858 [2024-11-27 10:03:42.131202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.858 [2024-11-27 10:03:42.131219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.858 [2024-11-27 10:03:42.131226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.858 [2024-11-27 10:03:42.131233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.858 [2024-11-27 10:03:42.131249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.858 qpair failed and we were unable to recover it. 00:31:26.858 [2024-11-27 10:03:42.141200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.858 [2024-11-27 10:03:42.141275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.858 [2024-11-27 10:03:42.141292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.858 [2024-11-27 10:03:42.141299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.858 [2024-11-27 10:03:42.141306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.858 [2024-11-27 10:03:42.141323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.858 qpair failed and we were unable to recover it. 
00:31:26.858 [2024-11-27 10:03:42.151186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.858 [2024-11-27 10:03:42.151253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.858 [2024-11-27 10:03:42.151269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.858 [2024-11-27 10:03:42.151276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.858 [2024-11-27 10:03:42.151283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.858 [2024-11-27 10:03:42.151300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.858 qpair failed and we were unable to recover it. 00:31:26.858 [2024-11-27 10:03:42.161199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.858 [2024-11-27 10:03:42.161261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.858 [2024-11-27 10:03:42.161277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.858 [2024-11-27 10:03:42.161285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.858 [2024-11-27 10:03:42.161291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.858 [2024-11-27 10:03:42.161308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.858 qpair failed and we were unable to recover it. 00:31:26.858 [2024-11-27 10:03:42.171250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.858 [2024-11-27 10:03:42.171319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.858 [2024-11-27 10:03:42.171340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.858 [2024-11-27 10:03:42.171348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.858 [2024-11-27 10:03:42.171354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.858 [2024-11-27 10:03:42.171371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.858 qpair failed and we were unable to recover it. 
00:31:26.858 [2024-11-27 10:03:42.181280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.858 [2024-11-27 10:03:42.181367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.858 [2024-11-27 10:03:42.181384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.858 [2024-11-27 10:03:42.181391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.858 [2024-11-27 10:03:42.181398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.858 [2024-11-27 10:03:42.181414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.858 qpair failed and we were unable to recover it. 00:31:26.858 [2024-11-27 10:03:42.191299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.858 [2024-11-27 10:03:42.191392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.858 [2024-11-27 10:03:42.191408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.858 [2024-11-27 10:03:42.191415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.858 [2024-11-27 10:03:42.191422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.858 [2024-11-27 10:03:42.191439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.858 qpair failed and we were unable to recover it. 00:31:26.858 [2024-11-27 10:03:42.201318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.858 [2024-11-27 10:03:42.201391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.858 [2024-11-27 10:03:42.201409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.858 [2024-11-27 10:03:42.201416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.858 [2024-11-27 10:03:42.201423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.858 [2024-11-27 10:03:42.201439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.858 qpair failed and we were unable to recover it. 
00:31:26.858 [2024-11-27 10:03:42.211355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.858 [2024-11-27 10:03:42.211423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.858 [2024-11-27 10:03:42.211439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.858 [2024-11-27 10:03:42.211446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.858 [2024-11-27 10:03:42.211458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.858 [2024-11-27 10:03:42.211475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.858 qpair failed and we were unable to recover it. 00:31:26.858 [2024-11-27 10:03:42.221379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.858 [2024-11-27 10:03:42.221450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.858 [2024-11-27 10:03:42.221468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.858 [2024-11-27 10:03:42.221476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.858 [2024-11-27 10:03:42.221482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.858 [2024-11-27 10:03:42.221499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.858 qpair failed and we were unable to recover it. 00:31:26.859 [2024-11-27 10:03:42.231421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.859 [2024-11-27 10:03:42.231480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.859 [2024-11-27 10:03:42.231497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.859 [2024-11-27 10:03:42.231504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.859 [2024-11-27 10:03:42.231511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.859 [2024-11-27 10:03:42.231527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.859 qpair failed and we were unable to recover it. 
00:31:26.859 [2024-11-27 10:03:42.241462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.859 [2024-11-27 10:03:42.241527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.859 [2024-11-27 10:03:42.241547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.859 [2024-11-27 10:03:42.241555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.859 [2024-11-27 10:03:42.241563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.859 [2024-11-27 10:03:42.241584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.859 qpair failed and we were unable to recover it. 00:31:26.859 [2024-11-27 10:03:42.251363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.859 [2024-11-27 10:03:42.251426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.859 [2024-11-27 10:03:42.251444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.859 [2024-11-27 10:03:42.251451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.859 [2024-11-27 10:03:42.251457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.859 [2024-11-27 10:03:42.251474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.859 qpair failed and we were unable to recover it. 00:31:26.859 [2024-11-27 10:03:42.261533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.859 [2024-11-27 10:03:42.261602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.859 [2024-11-27 10:03:42.261619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.859 [2024-11-27 10:03:42.261626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.859 [2024-11-27 10:03:42.261633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.859 [2024-11-27 10:03:42.261649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.859 qpair failed and we were unable to recover it. 
00:31:26.859 [2024-11-27 10:03:42.271500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.859 [2024-11-27 10:03:42.271562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.859 [2024-11-27 10:03:42.271579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.859 [2024-11-27 10:03:42.271587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.859 [2024-11-27 10:03:42.271593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.859 [2024-11-27 10:03:42.271609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.859 qpair failed and we were unable to recover it. 00:31:26.859 [2024-11-27 10:03:42.281575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.859 [2024-11-27 10:03:42.281638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.859 [2024-11-27 10:03:42.281654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.859 [2024-11-27 10:03:42.281662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.859 [2024-11-27 10:03:42.281668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.859 [2024-11-27 10:03:42.281685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.859 qpair failed and we were unable to recover it. 00:31:26.859 [2024-11-27 10:03:42.291619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.859 [2024-11-27 10:03:42.291683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.859 [2024-11-27 10:03:42.291699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.859 [2024-11-27 10:03:42.291707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.859 [2024-11-27 10:03:42.291713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.859 [2024-11-27 10:03:42.291729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.859 qpair failed and we were unable to recover it. 
00:31:26.859 [2024-11-27 10:03:42.301625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.859 [2024-11-27 10:03:42.301698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.859 [2024-11-27 10:03:42.301720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.859 [2024-11-27 10:03:42.301727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.859 [2024-11-27 10:03:42.301734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.859 [2024-11-27 10:03:42.301751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.859 qpair failed and we were unable to recover it. 00:31:26.859 [2024-11-27 10:03:42.311643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.859 [2024-11-27 10:03:42.311700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.859 [2024-11-27 10:03:42.311717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.859 [2024-11-27 10:03:42.311724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.859 [2024-11-27 10:03:42.311731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:26.859 [2024-11-27 10:03:42.311747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.859 qpair failed and we were unable to recover it. 00:31:26.859 [2024-11-27 10:03:42.321668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.122 [2024-11-27 10:03:42.321724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.122 [2024-11-27 10:03:42.321743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.122 [2024-11-27 10:03:42.321752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.122 [2024-11-27 10:03:42.321761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.122 [2024-11-27 10:03:42.321780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.122 qpair failed and we were unable to recover it. 
00:31:27.122 [2024-11-27 10:03:42.331718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.122 [2024-11-27 10:03:42.331784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.122 [2024-11-27 10:03:42.331800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.122 [2024-11-27 10:03:42.331808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.122 [2024-11-27 10:03:42.331814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.122 [2024-11-27 10:03:42.331830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.122 qpair failed and we were unable to recover it. 00:31:27.122 [2024-11-27 10:03:42.341769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.122 [2024-11-27 10:03:42.341842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.122 [2024-11-27 10:03:42.341858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.122 [2024-11-27 10:03:42.341865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.122 [2024-11-27 10:03:42.341877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.123 [2024-11-27 10:03:42.341893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.123 qpair failed and we were unable to recover it. 00:31:27.123 [2024-11-27 10:03:42.351724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.123 [2024-11-27 10:03:42.351784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.123 [2024-11-27 10:03:42.351801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.123 [2024-11-27 10:03:42.351809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.123 [2024-11-27 10:03:42.351815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.123 [2024-11-27 10:03:42.351832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.123 qpair failed and we were unable to recover it. 
00:31:27.123 [2024-11-27 10:03:42.361820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.123 [2024-11-27 10:03:42.361885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.123 [2024-11-27 10:03:42.361902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.123 [2024-11-27 10:03:42.361909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.123 [2024-11-27 10:03:42.361915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.123 [2024-11-27 10:03:42.361932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.123 qpair failed and we were unable to recover it. 00:31:27.123 [2024-11-27 10:03:42.371855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.123 [2024-11-27 10:03:42.371928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.123 [2024-11-27 10:03:42.371964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.123 [2024-11-27 10:03:42.371973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.123 [2024-11-27 10:03:42.371982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.123 [2024-11-27 10:03:42.372007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.123 qpair failed and we were unable to recover it. 00:31:27.123 [2024-11-27 10:03:42.381894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.123 [2024-11-27 10:03:42.381967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.123 [2024-11-27 10:03:42.381990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.123 [2024-11-27 10:03:42.381998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.123 [2024-11-27 10:03:42.382005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.123 [2024-11-27 10:03:42.382024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.123 qpair failed and we were unable to recover it. 
00:31:27.123 [2024-11-27 10:03:42.391908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.123 [2024-11-27 10:03:42.391984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.123 [2024-11-27 10:03:42.392001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.123 [2024-11-27 10:03:42.392009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.123 [2024-11-27 10:03:42.392016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.123 [2024-11-27 10:03:42.392033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.123 qpair failed and we were unable to recover it. 00:31:27.123 [2024-11-27 10:03:42.401929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.123 [2024-11-27 10:03:42.402005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.123 [2024-11-27 10:03:42.402023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.123 [2024-11-27 10:03:42.402030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.123 [2024-11-27 10:03:42.402037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.123 [2024-11-27 10:03:42.402054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.123 qpair failed and we were unable to recover it. 00:31:27.123 [2024-11-27 10:03:42.411937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.123 [2024-11-27 10:03:42.412001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.123 [2024-11-27 10:03:42.412017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.123 [2024-11-27 10:03:42.412025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.123 [2024-11-27 10:03:42.412031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.123 [2024-11-27 10:03:42.412048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.123 qpair failed and we were unable to recover it. 
00:31:27.123 [2024-11-27 10:03:42.422003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.123 [2024-11-27 10:03:42.422077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.123 [2024-11-27 10:03:42.422094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.123 [2024-11-27 10:03:42.422101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.123 [2024-11-27 10:03:42.422108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.123 [2024-11-27 10:03:42.422124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.123 qpair failed and we were unable to recover it. 00:31:27.123 [2024-11-27 10:03:42.432019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.123 [2024-11-27 10:03:42.432096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.123 [2024-11-27 10:03:42.432118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.123 [2024-11-27 10:03:42.432125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.123 [2024-11-27 10:03:42.432131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.123 [2024-11-27 10:03:42.432148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.123 qpair failed and we were unable to recover it. 00:31:27.123 [2024-11-27 10:03:42.442045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.123 [2024-11-27 10:03:42.442101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.123 [2024-11-27 10:03:42.442117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.123 [2024-11-27 10:03:42.442124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.123 [2024-11-27 10:03:42.442131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.123 [2024-11-27 10:03:42.442147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.123 qpair failed and we were unable to recover it. 
00:31:27.123 [2024-11-27 10:03:42.452099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.123 [2024-11-27 10:03:42.452171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.123 [2024-11-27 10:03:42.452188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.123 [2024-11-27 10:03:42.452195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.123 [2024-11-27 10:03:42.452201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.123 [2024-11-27 10:03:42.452218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.123 qpair failed and we were unable to recover it.
00:31:27.123 [2024-11-27 10:03:42.462148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.123 [2024-11-27 10:03:42.462213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.123 [2024-11-27 10:03:42.462229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.123 [2024-11-27 10:03:42.462236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.123 [2024-11-27 10:03:42.462243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.123 [2024-11-27 10:03:42.462259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.123 qpair failed and we were unable to recover it.
00:31:27.123 [2024-11-27 10:03:42.472128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.123 [2024-11-27 10:03:42.472193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.123 [2024-11-27 10:03:42.472209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.123 [2024-11-27 10:03:42.472221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.123 [2024-11-27 10:03:42.472228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.123 [2024-11-27 10:03:42.472245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.124 qpair failed and we were unable to recover it.
00:31:27.124 [2024-11-27 10:03:42.482163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.124 [2024-11-27 10:03:42.482219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.124 [2024-11-27 10:03:42.482235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.124 [2024-11-27 10:03:42.482243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.124 [2024-11-27 10:03:42.482249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.124 [2024-11-27 10:03:42.482266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.124 qpair failed and we were unable to recover it.
00:31:27.124 [2024-11-27 10:03:42.492225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.124 [2024-11-27 10:03:42.492292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.124 [2024-11-27 10:03:42.492308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.124 [2024-11-27 10:03:42.492316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.124 [2024-11-27 10:03:42.492322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.124 [2024-11-27 10:03:42.492339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.124 qpair failed and we were unable to recover it.
00:31:27.124 [2024-11-27 10:03:42.502267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.124 [2024-11-27 10:03:42.502332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.124 [2024-11-27 10:03:42.502349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.124 [2024-11-27 10:03:42.502357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.124 [2024-11-27 10:03:42.502363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.124 [2024-11-27 10:03:42.502381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.124 qpair failed and we were unable to recover it.
00:31:27.124 [2024-11-27 10:03:42.512260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.124 [2024-11-27 10:03:42.512327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.124 [2024-11-27 10:03:42.512343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.124 [2024-11-27 10:03:42.512350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.124 [2024-11-27 10:03:42.512357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.124 [2024-11-27 10:03:42.512379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.124 qpair failed and we were unable to recover it.
00:31:27.124 [2024-11-27 10:03:42.522270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.124 [2024-11-27 10:03:42.522335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.124 [2024-11-27 10:03:42.522351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.124 [2024-11-27 10:03:42.522358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.124 [2024-11-27 10:03:42.522365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.124 [2024-11-27 10:03:42.522381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.124 qpair failed and we were unable to recover it.
00:31:27.124 [2024-11-27 10:03:42.532332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.124 [2024-11-27 10:03:42.532399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.124 [2024-11-27 10:03:42.532415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.124 [2024-11-27 10:03:42.532423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.124 [2024-11-27 10:03:42.532429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.124 [2024-11-27 10:03:42.532445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.124 qpair failed and we were unable to recover it.
00:31:27.124 [2024-11-27 10:03:42.542431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.124 [2024-11-27 10:03:42.542496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.124 [2024-11-27 10:03:42.542512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.124 [2024-11-27 10:03:42.542519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.124 [2024-11-27 10:03:42.542525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.124 [2024-11-27 10:03:42.542542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.124 qpair failed and we were unable to recover it.
00:31:27.124 [2024-11-27 10:03:42.552357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.124 [2024-11-27 10:03:42.552422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.124 [2024-11-27 10:03:42.552438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.124 [2024-11-27 10:03:42.552445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.124 [2024-11-27 10:03:42.552452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.124 [2024-11-27 10:03:42.552468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.124 qpair failed and we were unable to recover it.
00:31:27.124 [2024-11-27 10:03:42.562397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.124 [2024-11-27 10:03:42.562461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.124 [2024-11-27 10:03:42.562477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.124 [2024-11-27 10:03:42.562484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.124 [2024-11-27 10:03:42.562490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.124 [2024-11-27 10:03:42.562507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.124 qpair failed and we were unable to recover it.
00:31:27.124 [2024-11-27 10:03:42.572475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.124 [2024-11-27 10:03:42.572539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.124 [2024-11-27 10:03:42.572555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.124 [2024-11-27 10:03:42.572564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.124 [2024-11-27 10:03:42.572571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.124 [2024-11-27 10:03:42.572589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.124 qpair failed and we were unable to recover it.
00:31:27.124 [2024-11-27 10:03:42.582507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.124 [2024-11-27 10:03:42.582577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.124 [2024-11-27 10:03:42.582593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.124 [2024-11-27 10:03:42.582601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.124 [2024-11-27 10:03:42.582607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.124 [2024-11-27 10:03:42.582624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.124 qpair failed and we were unable to recover it.
00:31:27.387 [2024-11-27 10:03:42.592370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.387 [2024-11-27 10:03:42.592438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.387 [2024-11-27 10:03:42.592455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.387 [2024-11-27 10:03:42.592463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.387 [2024-11-27 10:03:42.592469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.387 [2024-11-27 10:03:42.592486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.387 qpair failed and we were unable to recover it.
00:31:27.387 [2024-11-27 10:03:42.602512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.387 [2024-11-27 10:03:42.602573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.387 [2024-11-27 10:03:42.602589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.387 [2024-11-27 10:03:42.602603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.387 [2024-11-27 10:03:42.602609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.387 [2024-11-27 10:03:42.602627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.387 qpair failed and we were unable to recover it.
00:31:27.387 [2024-11-27 10:03:42.612537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.387 [2024-11-27 10:03:42.612601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.387 [2024-11-27 10:03:42.612617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.387 [2024-11-27 10:03:42.612624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.387 [2024-11-27 10:03:42.612631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.387 [2024-11-27 10:03:42.612647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.387 qpair failed and we were unable to recover it.
00:31:27.387 [2024-11-27 10:03:42.622611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.387 [2024-11-27 10:03:42.622706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.387 [2024-11-27 10:03:42.622722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.387 [2024-11-27 10:03:42.622729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.387 [2024-11-27 10:03:42.622736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.387 [2024-11-27 10:03:42.622752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.387 qpair failed and we were unable to recover it.
00:31:27.387 [2024-11-27 10:03:42.632598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.387 [2024-11-27 10:03:42.632666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.387 [2024-11-27 10:03:42.632681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.387 [2024-11-27 10:03:42.632688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.387 [2024-11-27 10:03:42.632695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.387 [2024-11-27 10:03:42.632711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.387 qpair failed and we were unable to recover it.
00:31:27.387 [2024-11-27 10:03:42.642633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.387 [2024-11-27 10:03:42.642696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.387 [2024-11-27 10:03:42.642712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.387 [2024-11-27 10:03:42.642722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.387 [2024-11-27 10:03:42.642732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.387 [2024-11-27 10:03:42.642755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.387 qpair failed and we were unable to recover it.
00:31:27.387 [2024-11-27 10:03:42.652584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.387 [2024-11-27 10:03:42.652652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.387 [2024-11-27 10:03:42.652669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.387 [2024-11-27 10:03:42.652676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.387 [2024-11-27 10:03:42.652682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.387 [2024-11-27 10:03:42.652699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.387 qpair failed and we were unable to recover it.
00:31:27.387 [2024-11-27 10:03:42.662729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.387 [2024-11-27 10:03:42.662807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.387 [2024-11-27 10:03:42.662822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.387 [2024-11-27 10:03:42.662830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.387 [2024-11-27 10:03:42.662836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.387 [2024-11-27 10:03:42.662854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.387 qpair failed and we were unable to recover it.
00:31:27.387 [2024-11-27 10:03:42.672693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.387 [2024-11-27 10:03:42.672752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.387 [2024-11-27 10:03:42.672769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.387 [2024-11-27 10:03:42.672777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.387 [2024-11-27 10:03:42.672784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.387 [2024-11-27 10:03:42.672800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.387 qpair failed and we were unable to recover it.
00:31:27.387 [2024-11-27 10:03:42.682763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.387 [2024-11-27 10:03:42.682840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.387 [2024-11-27 10:03:42.682872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.387 [2024-11-27 10:03:42.682881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.387 [2024-11-27 10:03:42.682889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.387 [2024-11-27 10:03:42.682913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.387 qpair failed and we were unable to recover it.
00:31:27.387 [2024-11-27 10:03:42.692802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.387 [2024-11-27 10:03:42.692872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.387 [2024-11-27 10:03:42.692890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.387 [2024-11-27 10:03:42.692898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.387 [2024-11-27 10:03:42.692904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.387 [2024-11-27 10:03:42.692922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.387 qpair failed and we were unable to recover it.
00:31:27.387 [2024-11-27 10:03:42.702863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.387 [2024-11-27 10:03:42.702936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.387 [2024-11-27 10:03:42.702953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.387 [2024-11-27 10:03:42.702960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.387 [2024-11-27 10:03:42.702967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.387 [2024-11-27 10:03:42.702984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.387 qpair failed and we were unable to recover it.
00:31:27.387 [2024-11-27 10:03:42.712854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.387 [2024-11-27 10:03:42.712949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.387 [2024-11-27 10:03:42.712965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.387 [2024-11-27 10:03:42.712972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.387 [2024-11-27 10:03:42.712979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.388 [2024-11-27 10:03:42.712995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.388 qpair failed and we were unable to recover it.
00:31:27.388 [2024-11-27 10:03:42.722875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.388 [2024-11-27 10:03:42.722933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.388 [2024-11-27 10:03:42.722951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.388 [2024-11-27 10:03:42.722960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.388 [2024-11-27 10:03:42.722966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.388 [2024-11-27 10:03:42.722982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.388 qpair failed and we were unable to recover it.
00:31:27.388 [2024-11-27 10:03:42.732949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.388 [2024-11-27 10:03:42.733016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.388 [2024-11-27 10:03:42.733039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.388 [2024-11-27 10:03:42.733046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.388 [2024-11-27 10:03:42.733053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.388 [2024-11-27 10:03:42.733070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.388 qpair failed and we were unable to recover it.
00:31:27.388 [2024-11-27 10:03:42.742976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.388 [2024-11-27 10:03:42.743084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.388 [2024-11-27 10:03:42.743101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.388 [2024-11-27 10:03:42.743109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.388 [2024-11-27 10:03:42.743115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.388 [2024-11-27 10:03:42.743133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.388 qpair failed and we were unable to recover it.
00:31:27.388 [2024-11-27 10:03:42.752978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.388 [2024-11-27 10:03:42.753036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.388 [2024-11-27 10:03:42.753052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.388 [2024-11-27 10:03:42.753059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.388 [2024-11-27 10:03:42.753066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.388 [2024-11-27 10:03:42.753082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.388 qpair failed and we were unable to recover it.
00:31:27.388 [2024-11-27 10:03:42.762983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.388 [2024-11-27 10:03:42.763047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.388 [2024-11-27 10:03:42.763064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.388 [2024-11-27 10:03:42.763071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.388 [2024-11-27 10:03:42.763078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.388 [2024-11-27 10:03:42.763094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.388 qpair failed and we were unable to recover it.
00:31:27.388 [2024-11-27 10:03:42.773053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.388 [2024-11-27 10:03:42.773117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.388 [2024-11-27 10:03:42.773133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.388 [2024-11-27 10:03:42.773140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.388 [2024-11-27 10:03:42.773151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.388 [2024-11-27 10:03:42.773172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.388 qpair failed and we were unable to recover it.
00:31:27.388 [2024-11-27 10:03:42.783087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.388 [2024-11-27 10:03:42.783156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.388 [2024-11-27 10:03:42.783177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.388 [2024-11-27 10:03:42.783184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.388 [2024-11-27 10:03:42.783191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.388 [2024-11-27 10:03:42.783207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.388 qpair failed and we were unable to recover it.
00:31:27.388 [2024-11-27 10:03:42.793000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.388 [2024-11-27 10:03:42.793050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.388 [2024-11-27 10:03:42.793066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.388 [2024-11-27 10:03:42.793074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.388 [2024-11-27 10:03:42.793080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.388 [2024-11-27 10:03:42.793096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.388 qpair failed and we were unable to recover it.
00:31:27.388 [2024-11-27 10:03:42.803095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.388 [2024-11-27 10:03:42.803151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.388 [2024-11-27 10:03:42.803170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.388 [2024-11-27 10:03:42.803178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.388 [2024-11-27 10:03:42.803184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.388 [2024-11-27 10:03:42.803200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.388 qpair failed and we were unable to recover it.
00:31:27.388 [2024-11-27 10:03:42.813143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.388 [2024-11-27 10:03:42.813210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.388 [2024-11-27 10:03:42.813225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.388 [2024-11-27 10:03:42.813232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.388 [2024-11-27 10:03:42.813239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.388 [2024-11-27 10:03:42.813254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.388 qpair failed and we were unable to recover it.
00:31:27.388 [2024-11-27 10:03:42.823068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.388 [2024-11-27 10:03:42.823129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.388 [2024-11-27 10:03:42.823146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.388 [2024-11-27 10:03:42.823154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.388 [2024-11-27 10:03:42.823167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.388 [2024-11-27 10:03:42.823184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.388 qpair failed and we were unable to recover it.
00:31:27.388 [2024-11-27 10:03:42.833138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.388 [2024-11-27 10:03:42.833198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.388 [2024-11-27 10:03:42.833213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.388 [2024-11-27 10:03:42.833220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.388 [2024-11-27 10:03:42.833226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.388 [2024-11-27 10:03:42.833242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.388 qpair failed and we were unable to recover it.
00:31:27.388 [2024-11-27 10:03:42.843096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.388 [2024-11-27 10:03:42.843147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.388 [2024-11-27 10:03:42.843165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.388 [2024-11-27 10:03:42.843173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.388 [2024-11-27 10:03:42.843179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.389 [2024-11-27 10:03:42.843195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.389 qpair failed and we were unable to recover it.
00:31:27.650 [2024-11-27 10:03:42.853248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.650 [2024-11-27 10:03:42.853347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.650 [2024-11-27 10:03:42.853361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.650 [2024-11-27 10:03:42.853369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.650 [2024-11-27 10:03:42.853375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.650 [2024-11-27 10:03:42.853390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.650 qpair failed and we were unable to recover it.
00:31:27.650 [2024-11-27 10:03:42.863295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.650 [2024-11-27 10:03:42.863361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.650 [2024-11-27 10:03:42.863379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.650 [2024-11-27 10:03:42.863386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.650 [2024-11-27 10:03:42.863393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.650 [2024-11-27 10:03:42.863408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.650 qpair failed and we were unable to recover it.
00:31:27.650 [2024-11-27 10:03:42.873263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.650 [2024-11-27 10:03:42.873311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.650 [2024-11-27 10:03:42.873325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.650 [2024-11-27 10:03:42.873332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.650 [2024-11-27 10:03:42.873338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.650 [2024-11-27 10:03:42.873353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.650 qpair failed and we were unable to recover it.
00:31:27.650 [2024-11-27 10:03:42.883308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.650 [2024-11-27 10:03:42.883361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.650 [2024-11-27 10:03:42.883375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.650 [2024-11-27 10:03:42.883382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.650 [2024-11-27 10:03:42.883388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.650 [2024-11-27 10:03:42.883403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.650 qpair failed and we were unable to recover it.
00:31:27.650 [2024-11-27 10:03:42.893257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.650 [2024-11-27 10:03:42.893312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.650 [2024-11-27 10:03:42.893326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.650 [2024-11-27 10:03:42.893333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.650 [2024-11-27 10:03:42.893339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.650 [2024-11-27 10:03:42.893354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.650 qpair failed and we were unable to recover it.
00:31:27.650 [2024-11-27 10:03:42.903385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.650 [2024-11-27 10:03:42.903445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.650 [2024-11-27 10:03:42.903459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.650 [2024-11-27 10:03:42.903466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.650 [2024-11-27 10:03:42.903476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.650 [2024-11-27 10:03:42.903491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.650 qpair failed and we were unable to recover it.
00:31:27.650 [2024-11-27 10:03:42.913358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.651 [2024-11-27 10:03:42.913406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.651 [2024-11-27 10:03:42.913419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.651 [2024-11-27 10:03:42.913426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.651 [2024-11-27 10:03:42.913432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.651 [2024-11-27 10:03:42.913446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.651 qpair failed and we were unable to recover it.
00:31:27.651 [2024-11-27 10:03:42.923401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.651 [2024-11-27 10:03:42.923460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.651 [2024-11-27 10:03:42.923476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.651 [2024-11-27 10:03:42.923483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.651 [2024-11-27 10:03:42.923490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.651 [2024-11-27 10:03:42.923509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.651 qpair failed and we were unable to recover it.
00:31:27.651 [2024-11-27 10:03:42.933456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.651 [2024-11-27 10:03:42.933512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.651 [2024-11-27 10:03:42.933526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.651 [2024-11-27 10:03:42.933533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.651 [2024-11-27 10:03:42.933539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.651 [2024-11-27 10:03:42.933554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.651 qpair failed and we were unable to recover it.
00:31:27.651 [2024-11-27 10:03:42.943363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.651 [2024-11-27 10:03:42.943424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.651 [2024-11-27 10:03:42.943437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.651 [2024-11-27 10:03:42.943444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.651 [2024-11-27 10:03:42.943450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.651 [2024-11-27 10:03:42.943465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.651 qpair failed and we were unable to recover it.
00:31:27.651 [2024-11-27 10:03:42.953469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.651 [2024-11-27 10:03:42.953514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.651 [2024-11-27 10:03:42.953528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.651 [2024-11-27 10:03:42.953535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.651 [2024-11-27 10:03:42.953541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.651 [2024-11-27 10:03:42.953555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.651 qpair failed and we were unable to recover it.
00:31:27.651 [2024-11-27 10:03:42.963536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.651 [2024-11-27 10:03:42.963594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.651 [2024-11-27 10:03:42.963607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.651 [2024-11-27 10:03:42.963614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.651 [2024-11-27 10:03:42.963620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.651 [2024-11-27 10:03:42.963634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.651 qpair failed and we were unable to recover it.
00:31:27.651 [2024-11-27 10:03:42.973560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.651 [2024-11-27 10:03:42.973616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.651 [2024-11-27 10:03:42.973629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.651 [2024-11-27 10:03:42.973636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.651 [2024-11-27 10:03:42.973642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.651 [2024-11-27 10:03:42.973656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.651 qpair failed and we were unable to recover it.
00:31:27.651 [2024-11-27 10:03:42.983595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.651 [2024-11-27 10:03:42.983649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.651 [2024-11-27 10:03:42.983662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.651 [2024-11-27 10:03:42.983669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.651 [2024-11-27 10:03:42.983676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.651 [2024-11-27 10:03:42.983690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.651 qpair failed and we were unable to recover it.
00:31:27.651 [2024-11-27 10:03:42.993579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.651 [2024-11-27 10:03:42.993627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.651 [2024-11-27 10:03:42.993643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.651 [2024-11-27 10:03:42.993650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.651 [2024-11-27 10:03:42.993656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.651 [2024-11-27 10:03:42.993670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.651 qpair failed and we were unable to recover it.
00:31:27.651 [2024-11-27 10:03:43.003636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.651 [2024-11-27 10:03:43.003686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.651 [2024-11-27 10:03:43.003699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.651 [2024-11-27 10:03:43.003706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.651 [2024-11-27 10:03:43.003713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.651 [2024-11-27 10:03:43.003727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.651 qpair failed and we were unable to recover it.
00:31:27.651 [2024-11-27 10:03:43.013675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:27.651 [2024-11-27 10:03:43.013758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:27.651 [2024-11-27 10:03:43.013771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:27.651 [2024-11-27 10:03:43.013778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:27.651 [2024-11-27 10:03:43.013784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:27.652 [2024-11-27 10:03:43.013798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.652 qpair failed and we were unable to recover it.
00:31:27.652 [2024-11-27 10:03:43.023708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.652 [2024-11-27 10:03:43.023760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.652 [2024-11-27 10:03:43.023773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.652 [2024-11-27 10:03:43.023780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.652 [2024-11-27 10:03:43.023786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.652 [2024-11-27 10:03:43.023800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.652 qpair failed and we were unable to recover it. 00:31:27.652 [2024-11-27 10:03:43.033693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.652 [2024-11-27 10:03:43.033739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.652 [2024-11-27 10:03:43.033751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.652 [2024-11-27 10:03:43.033764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.652 [2024-11-27 10:03:43.033770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.652 [2024-11-27 10:03:43.033784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.652 qpair failed and we were unable to recover it. 00:31:27.652 [2024-11-27 10:03:43.043760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.652 [2024-11-27 10:03:43.043805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.652 [2024-11-27 10:03:43.043818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.652 [2024-11-27 10:03:43.043825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.652 [2024-11-27 10:03:43.043831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.652 [2024-11-27 10:03:43.043845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.652 qpair failed and we were unable to recover it. 
00:31:27.652 [2024-11-27 10:03:43.053778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.652 [2024-11-27 10:03:43.053831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.652 [2024-11-27 10:03:43.053843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.652 [2024-11-27 10:03:43.053850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.652 [2024-11-27 10:03:43.053856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.652 [2024-11-27 10:03:43.053870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.652 qpair failed and we were unable to recover it. 00:31:27.652 [2024-11-27 10:03:43.063747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.652 [2024-11-27 10:03:43.063843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.652 [2024-11-27 10:03:43.063856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.652 [2024-11-27 10:03:43.063863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.652 [2024-11-27 10:03:43.063869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.652 [2024-11-27 10:03:43.063884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.652 qpair failed and we were unable to recover it. 00:31:27.652 [2024-11-27 10:03:43.073687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.652 [2024-11-27 10:03:43.073737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.652 [2024-11-27 10:03:43.073751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.652 [2024-11-27 10:03:43.073758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.652 [2024-11-27 10:03:43.073765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.652 [2024-11-27 10:03:43.073784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.652 qpair failed and we were unable to recover it. 
00:31:27.652 [2024-11-27 10:03:43.083758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.652 [2024-11-27 10:03:43.083813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.652 [2024-11-27 10:03:43.083825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.652 [2024-11-27 10:03:43.083833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.652 [2024-11-27 10:03:43.083839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.652 [2024-11-27 10:03:43.083853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.652 qpair failed and we were unable to recover it. 00:31:27.652 [2024-11-27 10:03:43.093899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.652 [2024-11-27 10:03:43.093956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.652 [2024-11-27 10:03:43.093969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.652 [2024-11-27 10:03:43.093976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.652 [2024-11-27 10:03:43.093982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.652 [2024-11-27 10:03:43.093996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.652 qpair failed and we were unable to recover it. 00:31:27.652 [2024-11-27 10:03:43.103935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.652 [2024-11-27 10:03:43.103994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.652 [2024-11-27 10:03:43.104009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.652 [2024-11-27 10:03:43.104017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.652 [2024-11-27 10:03:43.104023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.652 [2024-11-27 10:03:43.104041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.652 qpair failed and we were unable to recover it. 
00:31:27.652 [2024-11-27 10:03:43.113911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.652 [2024-11-27 10:03:43.113956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.652 [2024-11-27 10:03:43.113969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.652 [2024-11-27 10:03:43.113976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.652 [2024-11-27 10:03:43.113983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.652 [2024-11-27 10:03:43.113997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.652 qpair failed and we were unable to recover it. 00:31:27.914 [2024-11-27 10:03:43.123958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.914 [2024-11-27 10:03:43.124015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.914 [2024-11-27 10:03:43.124028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.914 [2024-11-27 10:03:43.124035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.914 [2024-11-27 10:03:43.124041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.914 [2024-11-27 10:03:43.124056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.914 qpair failed and we were unable to recover it. 00:31:27.914 [2024-11-27 10:03:43.133991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.914 [2024-11-27 10:03:43.134044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.914 [2024-11-27 10:03:43.134057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.914 [2024-11-27 10:03:43.134064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.914 [2024-11-27 10:03:43.134070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.914 [2024-11-27 10:03:43.134085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.914 qpair failed and we were unable to recover it. 
00:31:27.914 [2024-11-27 10:03:43.144001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.914 [2024-11-27 10:03:43.144051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.914 [2024-11-27 10:03:43.144064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.914 [2024-11-27 10:03:43.144071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.914 [2024-11-27 10:03:43.144077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.914 [2024-11-27 10:03:43.144091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.914 qpair failed and we were unable to recover it. 00:31:27.914 [2024-11-27 10:03:43.154016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.914 [2024-11-27 10:03:43.154060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.914 [2024-11-27 10:03:43.154073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.914 [2024-11-27 10:03:43.154080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.914 [2024-11-27 10:03:43.154086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.914 [2024-11-27 10:03:43.154101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.914 qpair failed and we were unable to recover it. 00:31:27.914 [2024-11-27 10:03:43.164085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.914 [2024-11-27 10:03:43.164138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.914 [2024-11-27 10:03:43.164151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.914 [2024-11-27 10:03:43.164165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.914 [2024-11-27 10:03:43.164172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.914 [2024-11-27 10:03:43.164187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.914 qpair failed and we were unable to recover it. 
00:31:27.914 [2024-11-27 10:03:43.174083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.914 [2024-11-27 10:03:43.174134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.914 [2024-11-27 10:03:43.174147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.914 [2024-11-27 10:03:43.174154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.914 [2024-11-27 10:03:43.174164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.914 [2024-11-27 10:03:43.174179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.914 qpair failed and we were unable to recover it. 00:31:27.914 [2024-11-27 10:03:43.184141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.914 [2024-11-27 10:03:43.184197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.914 [2024-11-27 10:03:43.184210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.914 [2024-11-27 10:03:43.184217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.914 [2024-11-27 10:03:43.184223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.914 [2024-11-27 10:03:43.184237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.914 qpair failed and we were unable to recover it. 00:31:27.914 [2024-11-27 10:03:43.194086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.914 [2024-11-27 10:03:43.194133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.914 [2024-11-27 10:03:43.194146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.914 [2024-11-27 10:03:43.194153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.914 [2024-11-27 10:03:43.194163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.914 [2024-11-27 10:03:43.194178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.914 qpair failed and we were unable to recover it. 
00:31:27.914 [2024-11-27 10:03:43.204198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.914 [2024-11-27 10:03:43.204253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.914 [2024-11-27 10:03:43.204265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.914 [2024-11-27 10:03:43.204272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.914 [2024-11-27 10:03:43.204278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.914 [2024-11-27 10:03:43.204296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.914 qpair failed and we were unable to recover it. 00:31:27.914 [2024-11-27 10:03:43.214221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.914 [2024-11-27 10:03:43.214288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.914 [2024-11-27 10:03:43.214301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.914 [2024-11-27 10:03:43.214307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.914 [2024-11-27 10:03:43.214314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.914 [2024-11-27 10:03:43.214328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.914 qpair failed and we were unable to recover it. 00:31:27.914 [2024-11-27 10:03:43.224265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.914 [2024-11-27 10:03:43.224315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.914 [2024-11-27 10:03:43.224328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.914 [2024-11-27 10:03:43.224335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.914 [2024-11-27 10:03:43.224341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.914 [2024-11-27 10:03:43.224356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.914 qpair failed and we were unable to recover it. 
00:31:27.914 [2024-11-27 10:03:43.234111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.914 [2024-11-27 10:03:43.234156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.914 [2024-11-27 10:03:43.234172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.914 [2024-11-27 10:03:43.234179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.914 [2024-11-27 10:03:43.234185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.914 [2024-11-27 10:03:43.234199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.914 qpair failed and we were unable to recover it. 00:31:27.914 [2024-11-27 10:03:43.244307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.914 [2024-11-27 10:03:43.244358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.915 [2024-11-27 10:03:43.244370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.915 [2024-11-27 10:03:43.244377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.915 [2024-11-27 10:03:43.244384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.915 [2024-11-27 10:03:43.244398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.915 qpair failed and we were unable to recover it. 00:31:27.915 [2024-11-27 10:03:43.254327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.915 [2024-11-27 10:03:43.254380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.915 [2024-11-27 10:03:43.254394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.915 [2024-11-27 10:03:43.254401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.915 [2024-11-27 10:03:43.254407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.915 [2024-11-27 10:03:43.254421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.915 qpair failed and we were unable to recover it. 
00:31:27.915 [2024-11-27 10:03:43.264362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.915 [2024-11-27 10:03:43.264415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.915 [2024-11-27 10:03:43.264428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.915 [2024-11-27 10:03:43.264435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.915 [2024-11-27 10:03:43.264441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.915 [2024-11-27 10:03:43.264455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.915 qpair failed and we were unable to recover it. 00:31:27.915 [2024-11-27 10:03:43.274332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.915 [2024-11-27 10:03:43.274379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.915 [2024-11-27 10:03:43.274392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.915 [2024-11-27 10:03:43.274399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.915 [2024-11-27 10:03:43.274405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.915 [2024-11-27 10:03:43.274419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.915 qpair failed and we were unable to recover it. 00:31:27.915 [2024-11-27 10:03:43.284401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.915 [2024-11-27 10:03:43.284449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.915 [2024-11-27 10:03:43.284461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.915 [2024-11-27 10:03:43.284468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.915 [2024-11-27 10:03:43.284475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.915 [2024-11-27 10:03:43.284488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.915 qpair failed and we were unable to recover it. 
00:31:27.915 [2024-11-27 10:03:43.294435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.915 [2024-11-27 10:03:43.294488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.915 [2024-11-27 10:03:43.294504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.915 [2024-11-27 10:03:43.294511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.915 [2024-11-27 10:03:43.294517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.915 [2024-11-27 10:03:43.294531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.915 qpair failed and we were unable to recover it. 00:31:27.915 [2024-11-27 10:03:43.304465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.915 [2024-11-27 10:03:43.304522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.915 [2024-11-27 10:03:43.304535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.915 [2024-11-27 10:03:43.304542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.915 [2024-11-27 10:03:43.304548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.915 [2024-11-27 10:03:43.304562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.915 qpair failed and we were unable to recover it. 00:31:27.915 [2024-11-27 10:03:43.314327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.915 [2024-11-27 10:03:43.314373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.915 [2024-11-27 10:03:43.314387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.915 [2024-11-27 10:03:43.314394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.915 [2024-11-27 10:03:43.314401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.915 [2024-11-27 10:03:43.314415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.915 qpair failed and we were unable to recover it. 
00:31:27.915 [2024-11-27 10:03:43.324473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.915 [2024-11-27 10:03:43.324525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.915 [2024-11-27 10:03:43.324539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.915 [2024-11-27 10:03:43.324547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.915 [2024-11-27 10:03:43.324554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.915 [2024-11-27 10:03:43.324569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.915 qpair failed and we were unable to recover it. 00:31:27.915 [2024-11-27 10:03:43.334425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.915 [2024-11-27 10:03:43.334483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.915 [2024-11-27 10:03:43.334495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.915 [2024-11-27 10:03:43.334502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.915 [2024-11-27 10:03:43.334512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.915 [2024-11-27 10:03:43.334526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.915 qpair failed and we were unable to recover it. 00:31:27.915 [2024-11-27 10:03:43.344567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.915 [2024-11-27 10:03:43.344623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.915 [2024-11-27 10:03:43.344635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.915 [2024-11-27 10:03:43.344642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.915 [2024-11-27 10:03:43.344649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.915 [2024-11-27 10:03:43.344663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.915 qpair failed and we were unable to recover it. 
00:31:27.915 [2024-11-27 10:03:43.354539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.915 [2024-11-27 10:03:43.354585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.915 [2024-11-27 10:03:43.354597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.915 [2024-11-27 10:03:43.354605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.915 [2024-11-27 10:03:43.354611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.915 [2024-11-27 10:03:43.354625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.915 qpair failed and we were unable to recover it. 00:31:27.915 [2024-11-27 10:03:43.364615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.915 [2024-11-27 10:03:43.364704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.915 [2024-11-27 10:03:43.364717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.915 [2024-11-27 10:03:43.364724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.915 [2024-11-27 10:03:43.364730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.915 [2024-11-27 10:03:43.364745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.915 qpair failed and we were unable to recover it. 00:31:27.915 [2024-11-27 10:03:43.374648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.915 [2024-11-27 10:03:43.374704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.916 [2024-11-27 10:03:43.374717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.916 [2024-11-27 10:03:43.374724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.916 [2024-11-27 10:03:43.374730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:27.916 [2024-11-27 10:03:43.374744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.916 qpair failed and we were unable to recover it. 
00:31:28.178 [2024-11-27 10:03:43.384686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.178 [2024-11-27 10:03:43.384747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.178 [2024-11-27 10:03:43.384765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.178 [2024-11-27 10:03:43.384774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.178 [2024-11-27 10:03:43.384780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.178 [2024-11-27 10:03:43.384796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.178 qpair failed and we were unable to recover it. 00:31:28.178 [2024-11-27 10:03:43.394655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.178 [2024-11-27 10:03:43.394698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.178 [2024-11-27 10:03:43.394711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.178 [2024-11-27 10:03:43.394718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.178 [2024-11-27 10:03:43.394725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.178 [2024-11-27 10:03:43.394739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.178 qpair failed and we were unable to recover it. 00:31:28.178 [2024-11-27 10:03:43.404719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.178 [2024-11-27 10:03:43.404775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.178 [2024-11-27 10:03:43.404788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.178 [2024-11-27 10:03:43.404795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.178 [2024-11-27 10:03:43.404801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.178 [2024-11-27 10:03:43.404816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.178 qpair failed and we were unable to recover it. 
00:31:28.178 [2024-11-27 10:03:43.414764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.178 [2024-11-27 10:03:43.414816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.178 [2024-11-27 10:03:43.414829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.178 [2024-11-27 10:03:43.414836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.178 [2024-11-27 10:03:43.414842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.179 [2024-11-27 10:03:43.414856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.179 qpair failed and we were unable to recover it. 00:31:28.179 [2024-11-27 10:03:43.424784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.179 [2024-11-27 10:03:43.424838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.179 [2024-11-27 10:03:43.424856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.179 [2024-11-27 10:03:43.424864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.179 [2024-11-27 10:03:43.424871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.179 [2024-11-27 10:03:43.424887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.179 qpair failed and we were unable to recover it. 00:31:28.179 [2024-11-27 10:03:43.434764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.179 [2024-11-27 10:03:43.434818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.179 [2024-11-27 10:03:43.434843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.179 [2024-11-27 10:03:43.434852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.179 [2024-11-27 10:03:43.434858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.179 [2024-11-27 10:03:43.434878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.179 qpair failed and we were unable to recover it. 
00:31:28.179 [2024-11-27 10:03:43.444825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.179 [2024-11-27 10:03:43.444876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.179 [2024-11-27 10:03:43.444891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.179 [2024-11-27 10:03:43.444899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.179 [2024-11-27 10:03:43.444905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.179 [2024-11-27 10:03:43.444921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.179 qpair failed and we were unable to recover it. 00:31:28.179 [2024-11-27 10:03:43.454829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.179 [2024-11-27 10:03:43.454893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.179 [2024-11-27 10:03:43.454917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.179 [2024-11-27 10:03:43.454926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.179 [2024-11-27 10:03:43.454933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.179 [2024-11-27 10:03:43.454952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.179 qpair failed and we were unable to recover it. 00:31:28.179 [2024-11-27 10:03:43.464779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.179 [2024-11-27 10:03:43.464838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.179 [2024-11-27 10:03:43.464854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.179 [2024-11-27 10:03:43.464861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.179 [2024-11-27 10:03:43.464873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.179 [2024-11-27 10:03:43.464890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.179 qpair failed and we were unable to recover it. 
00:31:28.179 [2024-11-27 10:03:43.474878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.179 [2024-11-27 10:03:43.474927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.179 [2024-11-27 10:03:43.474944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.179 [2024-11-27 10:03:43.474951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.179 [2024-11-27 10:03:43.474958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.179 [2024-11-27 10:03:43.474973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.179 qpair failed and we were unable to recover it. 00:31:28.179 [2024-11-27 10:03:43.484997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.179 [2024-11-27 10:03:43.485059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.179 [2024-11-27 10:03:43.485076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.179 [2024-11-27 10:03:43.485084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.179 [2024-11-27 10:03:43.485091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.179 [2024-11-27 10:03:43.485107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.179 qpair failed and we were unable to recover it. 00:31:28.179 [2024-11-27 10:03:43.494987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.179 [2024-11-27 10:03:43.495041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.179 [2024-11-27 10:03:43.495055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.179 [2024-11-27 10:03:43.495062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.179 [2024-11-27 10:03:43.495068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.179 [2024-11-27 10:03:43.495083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.179 qpair failed and we were unable to recover it. 
00:31:28.179 [2024-11-27 10:03:43.504889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.179 [2024-11-27 10:03:43.504946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.179 [2024-11-27 10:03:43.504960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.179 [2024-11-27 10:03:43.504967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.179 [2024-11-27 10:03:43.504973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.179 [2024-11-27 10:03:43.504987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.179 qpair failed and we were unable to recover it. 00:31:28.179 [2024-11-27 10:03:43.514992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.179 [2024-11-27 10:03:43.515038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.179 [2024-11-27 10:03:43.515051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.179 [2024-11-27 10:03:43.515058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.179 [2024-11-27 10:03:43.515065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.179 [2024-11-27 10:03:43.515078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.179 qpair failed and we were unable to recover it. 00:31:28.179 [2024-11-27 10:03:43.525114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.179 [2024-11-27 10:03:43.525188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.179 [2024-11-27 10:03:43.525201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.180 [2024-11-27 10:03:43.525208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.180 [2024-11-27 10:03:43.525215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.180 [2024-11-27 10:03:43.525229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.180 qpair failed and we were unable to recover it. 
00:31:28.180 [2024-11-27 10:03:43.535093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.180 [2024-11-27 10:03:43.535150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.180 [2024-11-27 10:03:43.535168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.180 [2024-11-27 10:03:43.535176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.180 [2024-11-27 10:03:43.535182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.180 [2024-11-27 10:03:43.535196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.180 qpair failed and we were unable to recover it. 00:31:28.180 [2024-11-27 10:03:43.545096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.180 [2024-11-27 10:03:43.545154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.180 [2024-11-27 10:03:43.545170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.180 [2024-11-27 10:03:43.545177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.180 [2024-11-27 10:03:43.545184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.180 [2024-11-27 10:03:43.545198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.180 qpair failed and we were unable to recover it. 00:31:28.180 [2024-11-27 10:03:43.555114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.180 [2024-11-27 10:03:43.555171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.180 [2024-11-27 10:03:43.555184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.180 [2024-11-27 10:03:43.555191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.180 [2024-11-27 10:03:43.555197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.180 [2024-11-27 10:03:43.555212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.180 qpair failed and we were unable to recover it. 
00:31:28.180 [2024-11-27 10:03:43.565164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.180 [2024-11-27 10:03:43.565214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.180 [2024-11-27 10:03:43.565227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.180 [2024-11-27 10:03:43.565234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.180 [2024-11-27 10:03:43.565240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.180 [2024-11-27 10:03:43.565254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.180 qpair failed and we were unable to recover it. 00:31:28.180 [2024-11-27 10:03:43.575206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.180 [2024-11-27 10:03:43.575289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.180 [2024-11-27 10:03:43.575303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.180 [2024-11-27 10:03:43.575311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.180 [2024-11-27 10:03:43.575318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.180 [2024-11-27 10:03:43.575332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.180 qpair failed and we were unable to recover it. 00:31:28.180 [2024-11-27 10:03:43.585235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.180 [2024-11-27 10:03:43.585288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.180 [2024-11-27 10:03:43.585301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.180 [2024-11-27 10:03:43.585308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.180 [2024-11-27 10:03:43.585314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.180 [2024-11-27 10:03:43.585329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.180 qpair failed and we were unable to recover it. 
00:31:28.180 [2024-11-27 10:03:43.595199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.180 [2024-11-27 10:03:43.595255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.180 [2024-11-27 10:03:43.595268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.180 [2024-11-27 10:03:43.595278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.180 [2024-11-27 10:03:43.595285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.180 [2024-11-27 10:03:43.595299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.180 qpair failed and we were unable to recover it. 00:31:28.180 [2024-11-27 10:03:43.605253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.180 [2024-11-27 10:03:43.605313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.180 [2024-11-27 10:03:43.605327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.180 [2024-11-27 10:03:43.605335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.180 [2024-11-27 10:03:43.605341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.180 [2024-11-27 10:03:43.605359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.180 qpair failed and we were unable to recover it. 00:31:28.180 [2024-11-27 10:03:43.615306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.180 [2024-11-27 10:03:43.615363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.180 [2024-11-27 10:03:43.615376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.180 [2024-11-27 10:03:43.615383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.180 [2024-11-27 10:03:43.615390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.180 [2024-11-27 10:03:43.615404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.180 qpair failed and we were unable to recover it. 
00:31:28.180 [2024-11-27 10:03:43.625322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.180 [2024-11-27 10:03:43.625379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.180 [2024-11-27 10:03:43.625392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.180 [2024-11-27 10:03:43.625399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.180 [2024-11-27 10:03:43.625405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.180 [2024-11-27 10:03:43.625420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.180 qpair failed and we were unable to recover it. 00:31:28.180 [2024-11-27 10:03:43.635366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.180 [2024-11-27 10:03:43.635437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.180 [2024-11-27 10:03:43.635450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.180 [2024-11-27 10:03:43.635457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.181 [2024-11-27 10:03:43.635463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.181 [2024-11-27 10:03:43.635481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.181 qpair failed and we were unable to recover it. 00:31:28.441 [2024-11-27 10:03:43.645345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.441 [2024-11-27 10:03:43.645396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.441 [2024-11-27 10:03:43.645410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.441 [2024-11-27 10:03:43.645417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.441 [2024-11-27 10:03:43.645423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.441 [2024-11-27 10:03:43.645438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.441 qpair failed and we were unable to recover it. 
00:31:28.441 [2024-11-27 10:03:43.655424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.441 [2024-11-27 10:03:43.655477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.441 [2024-11-27 10:03:43.655490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.441 [2024-11-27 10:03:43.655497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.441 [2024-11-27 10:03:43.655504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.441 [2024-11-27 10:03:43.655517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.441 qpair failed and we were unable to recover it. 00:31:28.441 [2024-11-27 10:03:43.665452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.441 [2024-11-27 10:03:43.665502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.441 [2024-11-27 10:03:43.665515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.441 [2024-11-27 10:03:43.665522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.441 [2024-11-27 10:03:43.665528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.441 [2024-11-27 10:03:43.665542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.441 qpair failed and we were unable to recover it. 00:31:28.441 [2024-11-27 10:03:43.675418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.442 [2024-11-27 10:03:43.675463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.442 [2024-11-27 10:03:43.675476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.442 [2024-11-27 10:03:43.675482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.442 [2024-11-27 10:03:43.675489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.442 [2024-11-27 10:03:43.675502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.442 qpair failed and we were unable to recover it. 
00:31:28.442 [2024-11-27 10:03:43.685466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.442 [2024-11-27 10:03:43.685519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.442 [2024-11-27 10:03:43.685532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.442 [2024-11-27 10:03:43.685539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.442 [2024-11-27 10:03:43.685545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.442 [2024-11-27 10:03:43.685559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.442 qpair failed and we were unable to recover it. 00:31:28.442 [2024-11-27 10:03:43.695517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.442 [2024-11-27 10:03:43.695573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.442 [2024-11-27 10:03:43.695586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.442 [2024-11-27 10:03:43.695593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.442 [2024-11-27 10:03:43.695600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.442 [2024-11-27 10:03:43.695614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.442 qpair failed and we were unable to recover it. 00:31:28.442 [2024-11-27 10:03:43.705533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.442 [2024-11-27 10:03:43.705586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.442 [2024-11-27 10:03:43.705599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.442 [2024-11-27 10:03:43.705606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.442 [2024-11-27 10:03:43.705612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.442 [2024-11-27 10:03:43.705626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.442 qpair failed and we were unable to recover it. 
00:31:28.442 [2024-11-27 10:03:43.715523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.442 [2024-11-27 10:03:43.715572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.442 [2024-11-27 10:03:43.715585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.442 [2024-11-27 10:03:43.715592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.442 [2024-11-27 10:03:43.715598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.442 [2024-11-27 10:03:43.715612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.442 qpair failed and we were unable to recover it. 00:31:28.442 [2024-11-27 10:03:43.725583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.442 [2024-11-27 10:03:43.725645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.442 [2024-11-27 10:03:43.725658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.442 [2024-11-27 10:03:43.725669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.442 [2024-11-27 10:03:43.725675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.442 [2024-11-27 10:03:43.725689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.442 qpair failed and we were unable to recover it. 00:31:28.442 [2024-11-27 10:03:43.735635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.442 [2024-11-27 10:03:43.735691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.442 [2024-11-27 10:03:43.735704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.442 [2024-11-27 10:03:43.735711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.442 [2024-11-27 10:03:43.735717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.442 [2024-11-27 10:03:43.735731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.442 qpair failed and we were unable to recover it. 
00:31:28.442 [2024-11-27 10:03:43.745669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.442 [2024-11-27 10:03:43.745725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.442 [2024-11-27 10:03:43.745739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.442 [2024-11-27 10:03:43.745746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.442 [2024-11-27 10:03:43.745753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.442 [2024-11-27 10:03:43.745767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.442 qpair failed and we were unable to recover it. 00:31:28.442 [2024-11-27 10:03:43.755642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.442 [2024-11-27 10:03:43.755706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.442 [2024-11-27 10:03:43.755719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.442 [2024-11-27 10:03:43.755727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.442 [2024-11-27 10:03:43.755733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.442 [2024-11-27 10:03:43.755747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.442 qpair failed and we were unable to recover it. 00:31:28.442 [2024-11-27 10:03:43.765705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.442 [2024-11-27 10:03:43.765754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.442 [2024-11-27 10:03:43.765768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.442 [2024-11-27 10:03:43.765775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.442 [2024-11-27 10:03:43.765782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.442 [2024-11-27 10:03:43.765799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.442 qpair failed and we were unable to recover it. 
00:31:28.442 [2024-11-27 10:03:43.775620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.442 [2024-11-27 10:03:43.775673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.442 [2024-11-27 10:03:43.775685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.442 [2024-11-27 10:03:43.775692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.442 [2024-11-27 10:03:43.775699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.442 [2024-11-27 10:03:43.775713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.442 qpair failed and we were unable to recover it. 00:31:28.442 [2024-11-27 10:03:43.785781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.442 [2024-11-27 10:03:43.785833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.442 [2024-11-27 10:03:43.785846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.442 [2024-11-27 10:03:43.785853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.442 [2024-11-27 10:03:43.785859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.442 [2024-11-27 10:03:43.785873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.442 qpair failed and we were unable to recover it. 00:31:28.442 [2024-11-27 10:03:43.795750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.442 [2024-11-27 10:03:43.795801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.442 [2024-11-27 10:03:43.795814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.442 [2024-11-27 10:03:43.795822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.442 [2024-11-27 10:03:43.795828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.442 [2024-11-27 10:03:43.795842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.442 qpair failed and we were unable to recover it. 
00:31:28.442 [2024-11-27 10:03:43.805778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.442 [2024-11-27 10:03:43.805832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.442 [2024-11-27 10:03:43.805846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.442 [2024-11-27 10:03:43.805853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.442 [2024-11-27 10:03:43.805860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.442 [2024-11-27 10:03:43.805874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.442 qpair failed and we were unable to recover it. 00:31:28.442 [2024-11-27 10:03:43.815720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.442 [2024-11-27 10:03:43.815772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.442 [2024-11-27 10:03:43.815785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.442 [2024-11-27 10:03:43.815792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.442 [2024-11-27 10:03:43.815799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.442 [2024-11-27 10:03:43.815812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.442 qpair failed and we were unable to recover it. 00:31:28.442 [2024-11-27 10:03:43.825851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.442 [2024-11-27 10:03:43.825903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.442 [2024-11-27 10:03:43.825916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.442 [2024-11-27 10:03:43.825923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.442 [2024-11-27 10:03:43.825930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.442 [2024-11-27 10:03:43.825944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.442 qpair failed and we were unable to recover it. 
00:31:28.442 [2024-11-27 10:03:43.835851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.442 [2024-11-27 10:03:43.835908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.442 [2024-11-27 10:03:43.835921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.442 [2024-11-27 10:03:43.835928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.442 [2024-11-27 10:03:43.835934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.442 [2024-11-27 10:03:43.835949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.442 qpair failed and we were unable to recover it. 00:31:28.442 [2024-11-27 10:03:43.845900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.442 [2024-11-27 10:03:43.845953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.442 [2024-11-27 10:03:43.845966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.442 [2024-11-27 10:03:43.845973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.442 [2024-11-27 10:03:43.845979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.442 [2024-11-27 10:03:43.845993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.442 qpair failed and we were unable to recover it. 00:31:28.442 [2024-11-27 10:03:43.855951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.442 [2024-11-27 10:03:43.856015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.442 [2024-11-27 10:03:43.856043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.442 [2024-11-27 10:03:43.856052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.442 [2024-11-27 10:03:43.856059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.442 [2024-11-27 10:03:43.856079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.442 qpair failed and we were unable to recover it. 
00:31:28.442 [2024-11-27 10:03:43.865981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.442 [2024-11-27 10:03:43.866035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.442 [2024-11-27 10:03:43.866050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.442 [2024-11-27 10:03:43.866058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.442 [2024-11-27 10:03:43.866064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.442 [2024-11-27 10:03:43.866080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.442 qpair failed and we were unable to recover it. 00:31:28.442 [2024-11-27 10:03:43.875972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.442 [2024-11-27 10:03:43.876019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.442 [2024-11-27 10:03:43.876033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.442 [2024-11-27 10:03:43.876040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.442 [2024-11-27 10:03:43.876046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.442 [2024-11-27 10:03:43.876061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.442 qpair failed and we were unable to recover it. 00:31:28.442 [2024-11-27 10:03:43.885962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.442 [2024-11-27 10:03:43.886013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.442 [2024-11-27 10:03:43.886026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.442 [2024-11-27 10:03:43.886034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.442 [2024-11-27 10:03:43.886040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.442 [2024-11-27 10:03:43.886054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.442 qpair failed and we were unable to recover it. 
00:31:28.442 [2024-11-27 10:03:43.896132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.442 [2024-11-27 10:03:43.896197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.442 [2024-11-27 10:03:43.896211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.442 [2024-11-27 10:03:43.896218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.442 [2024-11-27 10:03:43.896227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.442 [2024-11-27 10:03:43.896242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.442 qpair failed and we were unable to recover it. 00:31:28.442 [2024-11-27 10:03:43.906117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.442 [2024-11-27 10:03:43.906176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.442 [2024-11-27 10:03:43.906190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.442 [2024-11-27 10:03:43.906197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.442 [2024-11-27 10:03:43.906203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.442 [2024-11-27 10:03:43.906218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.442 qpair failed and we were unable to recover it. 00:31:28.704 [2024-11-27 10:03:43.916110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.704 [2024-11-27 10:03:43.916208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.704 [2024-11-27 10:03:43.916221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.704 [2024-11-27 10:03:43.916228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.704 [2024-11-27 10:03:43.916235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.704 [2024-11-27 10:03:43.916249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.704 qpair failed and we were unable to recover it. 
00:31:28.704 [2024-11-27 10:03:43.925993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.704 [2024-11-27 10:03:43.926040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.704 [2024-11-27 10:03:43.926054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.704 [2024-11-27 10:03:43.926061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.704 [2024-11-27 10:03:43.926067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.704 [2024-11-27 10:03:43.926082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.704 qpair failed and we were unable to recover it. 00:31:28.704 [2024-11-27 10:03:43.936193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.704 [2024-11-27 10:03:43.936246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.704 [2024-11-27 10:03:43.936260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.704 [2024-11-27 10:03:43.936267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.704 [2024-11-27 10:03:43.936273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.704 [2024-11-27 10:03:43.936288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.704 qpair failed and we were unable to recover it. 00:31:28.704 [2024-11-27 10:03:43.946233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.704 [2024-11-27 10:03:43.946298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.704 [2024-11-27 10:03:43.946311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.704 [2024-11-27 10:03:43.946318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.704 [2024-11-27 10:03:43.946325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.704 [2024-11-27 10:03:43.946339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.704 qpair failed and we were unable to recover it. 
00:31:28.704 [2024-11-27 10:03:43.956208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.704 [2024-11-27 10:03:43.956257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.704 [2024-11-27 10:03:43.956270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.704 [2024-11-27 10:03:43.956277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.704 [2024-11-27 10:03:43.956283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.704 [2024-11-27 10:03:43.956297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.704 qpair failed and we were unable to recover it. 00:31:28.704 [2024-11-27 10:03:43.966223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.704 [2024-11-27 10:03:43.966276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.704 [2024-11-27 10:03:43.966290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.704 [2024-11-27 10:03:43.966297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.704 [2024-11-27 10:03:43.966303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.704 [2024-11-27 10:03:43.966317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.704 qpair failed and we were unable to recover it. 00:31:28.704 [2024-11-27 10:03:43.976274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.704 [2024-11-27 10:03:43.976331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.704 [2024-11-27 10:03:43.976344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.704 [2024-11-27 10:03:43.976351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.704 [2024-11-27 10:03:43.976357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.704 [2024-11-27 10:03:43.976372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.704 qpair failed and we were unable to recover it. 
00:31:28.704 [2024-11-27 10:03:43.986303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.704 [2024-11-27 10:03:43.986358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.704 [2024-11-27 10:03:43.986374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.704 [2024-11-27 10:03:43.986381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.704 [2024-11-27 10:03:43.986387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.704 [2024-11-27 10:03:43.986402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.704 qpair failed and we were unable to recover it. 00:31:28.704 [2024-11-27 10:03:43.996330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.704 [2024-11-27 10:03:43.996401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.704 [2024-11-27 10:03:43.996414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.704 [2024-11-27 10:03:43.996421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.704 [2024-11-27 10:03:43.996428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.704 [2024-11-27 10:03:43.996442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.704 qpair failed and we were unable to recover it. 00:31:28.704 [2024-11-27 10:03:44.006325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.704 [2024-11-27 10:03:44.006368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.705 [2024-11-27 10:03:44.006381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.705 [2024-11-27 10:03:44.006388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.705 [2024-11-27 10:03:44.006395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.705 [2024-11-27 10:03:44.006409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.705 qpair failed and we were unable to recover it. 
00:31:28.705 [2024-11-27 10:03:44.016415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.705 [2024-11-27 10:03:44.016468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.705 [2024-11-27 10:03:44.016481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.705 [2024-11-27 10:03:44.016488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.705 [2024-11-27 10:03:44.016495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.705 [2024-11-27 10:03:44.016509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.705 qpair failed and we were unable to recover it. 00:31:28.705 [2024-11-27 10:03:44.026454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.705 [2024-11-27 10:03:44.026511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.705 [2024-11-27 10:03:44.026524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.705 [2024-11-27 10:03:44.026531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.705 [2024-11-27 10:03:44.026541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.705 [2024-11-27 10:03:44.026555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.705 qpair failed and we were unable to recover it. 00:31:28.705 [2024-11-27 10:03:44.036430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.705 [2024-11-27 10:03:44.036494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.705 [2024-11-27 10:03:44.036507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.705 [2024-11-27 10:03:44.036514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.705 [2024-11-27 10:03:44.036520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.705 [2024-11-27 10:03:44.036534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.705 qpair failed and we were unable to recover it. 
00:31:28.705 [2024-11-27 10:03:44.046466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.705 [2024-11-27 10:03:44.046513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.705 [2024-11-27 10:03:44.046526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.705 [2024-11-27 10:03:44.046533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.705 [2024-11-27 10:03:44.046540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.705 [2024-11-27 10:03:44.046554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.705 qpair failed and we were unable to recover it. 00:31:28.705 [2024-11-27 10:03:44.056525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.705 [2024-11-27 10:03:44.056580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.705 [2024-11-27 10:03:44.056594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.705 [2024-11-27 10:03:44.056601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.705 [2024-11-27 10:03:44.056607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.705 [2024-11-27 10:03:44.056625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.705 qpair failed and we were unable to recover it. 00:31:28.705 [2024-11-27 10:03:44.066559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.705 [2024-11-27 10:03:44.066656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.705 [2024-11-27 10:03:44.066670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.705 [2024-11-27 10:03:44.066677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.705 [2024-11-27 10:03:44.066684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.705 [2024-11-27 10:03:44.066698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.705 qpair failed and we were unable to recover it. 
00:31:28.705 [2024-11-27 10:03:44.076532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.705 [2024-11-27 10:03:44.076590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.705 [2024-11-27 10:03:44.076603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.705 [2024-11-27 10:03:44.076611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.705 [2024-11-27 10:03:44.076618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.705 [2024-11-27 10:03:44.076633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.705 qpair failed and we were unable to recover it. 00:31:28.705 [2024-11-27 10:03:44.086530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.705 [2024-11-27 10:03:44.086577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.705 [2024-11-27 10:03:44.086591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.705 [2024-11-27 10:03:44.086598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.705 [2024-11-27 10:03:44.086604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.705 [2024-11-27 10:03:44.086618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.705 qpair failed and we were unable to recover it. 00:31:28.705 [2024-11-27 10:03:44.096611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.705 [2024-11-27 10:03:44.096669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.705 [2024-11-27 10:03:44.096682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.705 [2024-11-27 10:03:44.096688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.705 [2024-11-27 10:03:44.096695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.705 [2024-11-27 10:03:44.096709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.705 qpair failed and we were unable to recover it. 
00:31:28.705 [2024-11-27 10:03:44.106654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.705 [2024-11-27 10:03:44.106712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.705 [2024-11-27 10:03:44.106725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.705 [2024-11-27 10:03:44.106732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.705 [2024-11-27 10:03:44.106738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.705 [2024-11-27 10:03:44.106752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.705 qpair failed and we were unable to recover it. 00:31:28.705 [2024-11-27 10:03:44.116660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.705 [2024-11-27 10:03:44.116711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.705 [2024-11-27 10:03:44.116725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.705 [2024-11-27 10:03:44.116732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.705 [2024-11-27 10:03:44.116738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.705 [2024-11-27 10:03:44.116752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.705 qpair failed and we were unable to recover it. 00:31:28.705 [2024-11-27 10:03:44.126716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.705 [2024-11-27 10:03:44.126768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.705 [2024-11-27 10:03:44.126781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.705 [2024-11-27 10:03:44.126789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.705 [2024-11-27 10:03:44.126795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.705 [2024-11-27 10:03:44.126809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.705 qpair failed and we were unable to recover it. 
00:31:28.705 [2024-11-27 10:03:44.136755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.705 [2024-11-27 10:03:44.136823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.705 [2024-11-27 10:03:44.136836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.706 [2024-11-27 10:03:44.136843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.706 [2024-11-27 10:03:44.136850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.706 [2024-11-27 10:03:44.136863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.706 qpair failed and we were unable to recover it. 00:31:28.706 [2024-11-27 10:03:44.146657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.706 [2024-11-27 10:03:44.146716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.706 [2024-11-27 10:03:44.146729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.706 [2024-11-27 10:03:44.146736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.706 [2024-11-27 10:03:44.146742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.706 [2024-11-27 10:03:44.146756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.706 qpair failed and we were unable to recover it. 00:31:28.706 [2024-11-27 10:03:44.156757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.706 [2024-11-27 10:03:44.156804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.706 [2024-11-27 10:03:44.156817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.706 [2024-11-27 10:03:44.156831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.706 [2024-11-27 10:03:44.156837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:28.706 [2024-11-27 10:03:44.156851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.706 qpair failed and we were unable to recover it. 
00:31:28.706 [2024-11-27 10:03:44.166792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.706 [2024-11-27 10:03:44.166840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.706 [2024-11-27 10:03:44.166853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.706 [2024-11-27 10:03:44.166860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.706 [2024-11-27 10:03:44.166866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:28.706 [2024-11-27 10:03:44.166880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.706 qpair failed and we were unable to recover it.
00:31:28.968 [2024-11-27 10:03:44.176868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.968 [2024-11-27 10:03:44.176923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.968 [2024-11-27 10:03:44.176937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.968 [2024-11-27 10:03:44.176944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.968 [2024-11-27 10:03:44.176950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:28.968 [2024-11-27 10:03:44.176965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.968 qpair failed and we were unable to recover it.
00:31:28.968 [2024-11-27 10:03:44.186898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.968 [2024-11-27 10:03:44.186958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.968 [2024-11-27 10:03:44.186983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.968 [2024-11-27 10:03:44.186991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.968 [2024-11-27 10:03:44.186998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:28.968 [2024-11-27 10:03:44.187018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.968 qpair failed and we were unable to recover it.
00:31:28.968 [2024-11-27 10:03:44.196873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.968 [2024-11-27 10:03:44.196927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.968 [2024-11-27 10:03:44.196950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.968 [2024-11-27 10:03:44.196959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.968 [2024-11-27 10:03:44.196966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:28.968 [2024-11-27 10:03:44.196990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.968 qpair failed and we were unable to recover it.
00:31:28.968 [2024-11-27 10:03:44.206896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.968 [2024-11-27 10:03:44.206947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.968 [2024-11-27 10:03:44.206971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.968 [2024-11-27 10:03:44.206980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.968 [2024-11-27 10:03:44.206987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:28.968 [2024-11-27 10:03:44.207006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.968 qpair failed and we were unable to recover it.
00:31:28.968 [2024-11-27 10:03:44.216990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.968 [2024-11-27 10:03:44.217043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.968 [2024-11-27 10:03:44.217058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.968 [2024-11-27 10:03:44.217065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.968 [2024-11-27 10:03:44.217072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:28.968 [2024-11-27 10:03:44.217088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.968 qpair failed and we were unable to recover it.
00:31:28.968 [2024-11-27 10:03:44.227000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.968 [2024-11-27 10:03:44.227053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.968 [2024-11-27 10:03:44.227068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.968 [2024-11-27 10:03:44.227075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.968 [2024-11-27 10:03:44.227081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:28.968 [2024-11-27 10:03:44.227096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.968 qpair failed and we were unable to recover it.
00:31:28.968 [2024-11-27 10:03:44.236987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.968 [2024-11-27 10:03:44.237033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.968 [2024-11-27 10:03:44.237047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.968 [2024-11-27 10:03:44.237054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.968 [2024-11-27 10:03:44.237061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:28.968 [2024-11-27 10:03:44.237075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.968 qpair failed and we were unable to recover it.
00:31:28.968 [2024-11-27 10:03:44.247004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.969 [2024-11-27 10:03:44.247054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.969 [2024-11-27 10:03:44.247067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.969 [2024-11-27 10:03:44.247074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.969 [2024-11-27 10:03:44.247081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:28.969 [2024-11-27 10:03:44.247095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.969 qpair failed and we were unable to recover it.
00:31:28.969 [2024-11-27 10:03:44.257080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.969 [2024-11-27 10:03:44.257139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.969 [2024-11-27 10:03:44.257151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.969 [2024-11-27 10:03:44.257163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.969 [2024-11-27 10:03:44.257170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:28.969 [2024-11-27 10:03:44.257185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.969 qpair failed and we were unable to recover it.
00:31:28.969 [2024-11-27 10:03:44.267114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.969 [2024-11-27 10:03:44.267169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.969 [2024-11-27 10:03:44.267183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.969 [2024-11-27 10:03:44.267190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.969 [2024-11-27 10:03:44.267196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:28.969 [2024-11-27 10:03:44.267210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.969 qpair failed and we were unable to recover it.
00:31:28.969 [2024-11-27 10:03:44.277087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.969 [2024-11-27 10:03:44.277133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.969 [2024-11-27 10:03:44.277146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.969 [2024-11-27 10:03:44.277153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.969 [2024-11-27 10:03:44.277163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:28.969 [2024-11-27 10:03:44.277178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.969 qpair failed and we were unable to recover it.
00:31:28.969 [2024-11-27 10:03:44.287111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.969 [2024-11-27 10:03:44.287154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.969 [2024-11-27 10:03:44.287170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.969 [2024-11-27 10:03:44.287181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.969 [2024-11-27 10:03:44.287188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:28.969 [2024-11-27 10:03:44.287202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.969 qpair failed and we were unable to recover it.
00:31:28.969 [2024-11-27 10:03:44.297164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.969 [2024-11-27 10:03:44.297218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.969 [2024-11-27 10:03:44.297231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.969 [2024-11-27 10:03:44.297238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.969 [2024-11-27 10:03:44.297244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:28.969 [2024-11-27 10:03:44.297258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.969 qpair failed and we were unable to recover it.
00:31:28.969 [2024-11-27 10:03:44.307238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.969 [2024-11-27 10:03:44.307295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.969 [2024-11-27 10:03:44.307309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.969 [2024-11-27 10:03:44.307316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.969 [2024-11-27 10:03:44.307322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:28.969 [2024-11-27 10:03:44.307336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.969 qpair failed and we were unable to recover it.
00:31:28.969 [2024-11-27 10:03:44.317206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.969 [2024-11-27 10:03:44.317252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.969 [2024-11-27 10:03:44.317265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.969 [2024-11-27 10:03:44.317272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.969 [2024-11-27 10:03:44.317279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:28.969 [2024-11-27 10:03:44.317292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.969 qpair failed and we were unable to recover it.
00:31:28.969 [2024-11-27 10:03:44.327218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.969 [2024-11-27 10:03:44.327265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.969 [2024-11-27 10:03:44.327278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.969 [2024-11-27 10:03:44.327285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.969 [2024-11-27 10:03:44.327292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:28.969 [2024-11-27 10:03:44.327311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.969 qpair failed and we were unable to recover it.
00:31:28.969 [2024-11-27 10:03:44.337334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.969 [2024-11-27 10:03:44.337388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.969 [2024-11-27 10:03:44.337401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.969 [2024-11-27 10:03:44.337408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.969 [2024-11-27 10:03:44.337414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:28.969 [2024-11-27 10:03:44.337428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.969 qpair failed and we were unable to recover it.
00:31:28.969 [2024-11-27 10:03:44.347303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.969 [2024-11-27 10:03:44.347358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.969 [2024-11-27 10:03:44.347370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.969 [2024-11-27 10:03:44.347377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.969 [2024-11-27 10:03:44.347383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:28.969 [2024-11-27 10:03:44.347397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.969 qpair failed and we were unable to recover it.
00:31:28.969 [2024-11-27 10:03:44.357267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.969 [2024-11-27 10:03:44.357318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.969 [2024-11-27 10:03:44.357330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.969 [2024-11-27 10:03:44.357337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.969 [2024-11-27 10:03:44.357344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:28.969 [2024-11-27 10:03:44.357358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.969 qpair failed and we were unable to recover it.
00:31:28.969 [2024-11-27 10:03:44.367322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.969 [2024-11-27 10:03:44.367376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.969 [2024-11-27 10:03:44.367388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.969 [2024-11-27 10:03:44.367395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.969 [2024-11-27 10:03:44.367402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:28.969 [2024-11-27 10:03:44.367416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.969 qpair failed and we were unable to recover it.
00:31:28.969 [2024-11-27 10:03:44.377421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.970 [2024-11-27 10:03:44.377478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.970 [2024-11-27 10:03:44.377491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.970 [2024-11-27 10:03:44.377498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.970 [2024-11-27 10:03:44.377504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:28.970 [2024-11-27 10:03:44.377518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.970 qpair failed and we were unable to recover it.
00:31:28.970 [2024-11-27 10:03:44.387442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.970 [2024-11-27 10:03:44.387538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.970 [2024-11-27 10:03:44.387552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.970 [2024-11-27 10:03:44.387559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.970 [2024-11-27 10:03:44.387565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:28.970 [2024-11-27 10:03:44.387580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.970 qpair failed and we were unable to recover it.
00:31:28.970 [2024-11-27 10:03:44.397400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.970 [2024-11-27 10:03:44.397454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.970 [2024-11-27 10:03:44.397467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.970 [2024-11-27 10:03:44.397474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.970 [2024-11-27 10:03:44.397480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:28.970 [2024-11-27 10:03:44.397494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.970 qpair failed and we were unable to recover it.
00:31:28.970 [2024-11-27 10:03:44.407405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.970 [2024-11-27 10:03:44.407449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.970 [2024-11-27 10:03:44.407462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.970 [2024-11-27 10:03:44.407469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.970 [2024-11-27 10:03:44.407476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:28.970 [2024-11-27 10:03:44.407489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.970 qpair failed and we were unable to recover it.
00:31:28.970 [2024-11-27 10:03:44.417479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.970 [2024-11-27 10:03:44.417533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.970 [2024-11-27 10:03:44.417549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.970 [2024-11-27 10:03:44.417556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.970 [2024-11-27 10:03:44.417562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:28.970 [2024-11-27 10:03:44.417576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.970 qpair failed and we were unable to recover it.
00:31:28.970 [2024-11-27 10:03:44.427431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.970 [2024-11-27 10:03:44.427488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.970 [2024-11-27 10:03:44.427501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.970 [2024-11-27 10:03:44.427508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.970 [2024-11-27 10:03:44.427514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:28.970 [2024-11-27 10:03:44.427528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.970 qpair failed and we were unable to recover it.
00:31:29.232 [2024-11-27 10:03:44.437518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:29.232 [2024-11-27 10:03:44.437567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:29.232 [2024-11-27 10:03:44.437580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:29.233 [2024-11-27 10:03:44.437588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:29.233 [2024-11-27 10:03:44.437594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:29.233 [2024-11-27 10:03:44.437608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:29.233 qpair failed and we were unable to recover it.
00:31:29.233 [2024-11-27 10:03:44.447560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:29.233 [2024-11-27 10:03:44.447608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:29.233 [2024-11-27 10:03:44.447620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:29.233 [2024-11-27 10:03:44.447628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:29.233 [2024-11-27 10:03:44.447634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:29.233 [2024-11-27 10:03:44.447648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:29.233 qpair failed and we were unable to recover it.
00:31:29.233 [2024-11-27 10:03:44.457497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:29.233 [2024-11-27 10:03:44.457551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:29.233 [2024-11-27 10:03:44.457564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:29.233 [2024-11-27 10:03:44.457571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:29.233 [2024-11-27 10:03:44.457581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:29.233 [2024-11-27 10:03:44.457595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:29.233 qpair failed and we were unable to recover it.
00:31:29.233 [2024-11-27 10:03:44.467649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:29.233 [2024-11-27 10:03:44.467732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:29.233 [2024-11-27 10:03:44.467746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:29.233 [2024-11-27 10:03:44.467753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:29.233 [2024-11-27 10:03:44.467759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:29.233 [2024-11-27 10:03:44.467773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:29.233 qpair failed and we were unable to recover it.
00:31:29.233 [2024-11-27 10:03:44.477630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:29.233 [2024-11-27 10:03:44.477683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:29.233 [2024-11-27 10:03:44.477696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:29.233 [2024-11-27 10:03:44.477703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:29.233 [2024-11-27 10:03:44.477709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:29.233 [2024-11-27 10:03:44.477724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:29.233 qpair failed and we were unable to recover it.
00:31:29.233 [2024-11-27 10:03:44.487664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:29.233 [2024-11-27 10:03:44.487713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:29.233 [2024-11-27 10:03:44.487726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:29.233 [2024-11-27 10:03:44.487733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:29.233 [2024-11-27 10:03:44.487739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:29.233 [2024-11-27 10:03:44.487753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:29.233 qpair failed and we were unable to recover it.
00:31:29.233 [2024-11-27 10:03:44.497734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:29.233 [2024-11-27 10:03:44.497787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:29.233 [2024-11-27 10:03:44.497800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:29.233 [2024-11-27 10:03:44.497806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:29.233 [2024-11-27 10:03:44.497813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:29.233 [2024-11-27 10:03:44.497827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:29.233 qpair failed and we were unable to recover it.
00:31:29.233 [2024-11-27 10:03:44.507761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:29.233 [2024-11-27 10:03:44.507815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:29.233 [2024-11-27 10:03:44.507828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:29.233 [2024-11-27 10:03:44.507835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:29.233 [2024-11-27 10:03:44.507841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:29.233 [2024-11-27 10:03:44.507855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:29.233 qpair failed and we were unable to recover it.
00:31:29.233 [2024-11-27 10:03:44.517739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:29.233 [2024-11-27 10:03:44.517788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:29.233 [2024-11-27 10:03:44.517801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:29.233 [2024-11-27 10:03:44.517808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:29.233 [2024-11-27 10:03:44.517814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:29.233 [2024-11-27 10:03:44.517828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:29.233 qpair failed and we were unable to recover it.
00:31:29.233 [2024-11-27 10:03:44.527773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:29.233 [2024-11-27 10:03:44.527820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:29.233 [2024-11-27 10:03:44.527833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:29.233 [2024-11-27 10:03:44.527840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:29.233 [2024-11-27 10:03:44.527846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:29.234 [2024-11-27 10:03:44.527860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:29.234 qpair failed and we were unable to recover it.
00:31:29.234 [2024-11-27 10:03:44.537834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:29.234 [2024-11-27 10:03:44.537889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:29.234 [2024-11-27 10:03:44.537902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:29.234 [2024-11-27 10:03:44.537909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:29.234 [2024-11-27 10:03:44.537915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:29.234 [2024-11-27 10:03:44.537930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:29.234 qpair failed and we were unable to recover it.
00:31:29.234 [2024-11-27 10:03:44.547873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:29.234 [2024-11-27 10:03:44.547930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:29.234 [2024-11-27 10:03:44.547946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:29.234 [2024-11-27 10:03:44.547953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:29.234 [2024-11-27 10:03:44.547959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:29.234 [2024-11-27 10:03:44.547973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:29.234 qpair failed and we were unable to recover it.
00:31:29.234 [2024-11-27 10:03:44.557847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:29.234 [2024-11-27 10:03:44.557905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:29.234 [2024-11-27 10:03:44.557918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:29.234 [2024-11-27 10:03:44.557925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:29.234 [2024-11-27 10:03:44.557931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:29.234 [2024-11-27 10:03:44.557945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:29.234 qpair failed and we were unable to recover it.
00:31:29.234 [2024-11-27 10:03:44.567772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:29.234 [2024-11-27 10:03:44.567824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:29.234 [2024-11-27 10:03:44.567837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:29.234 [2024-11-27 10:03:44.567844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:29.234 [2024-11-27 10:03:44.567850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:29.234 [2024-11-27 10:03:44.567864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:29.234 qpair failed and we were unable to recover it.
00:31:29.234 [2024-11-27 10:03:44.577914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:29.234 [2024-11-27 10:03:44.577974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:29.234 [2024-11-27 10:03:44.577998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:29.234 [2024-11-27 10:03:44.578007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:29.234 [2024-11-27 10:03:44.578014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:29.234 [2024-11-27 10:03:44.578034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:29.234 qpair failed and we were unable to recover it.
00:31:29.234 [2024-11-27 10:03:44.587998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:29.234 [2024-11-27 10:03:44.588053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:29.234 [2024-11-27 10:03:44.588067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:29.234 [2024-11-27 10:03:44.588075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:29.234 [2024-11-27 10:03:44.588086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:29.234 [2024-11-27 10:03:44.588102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:29.234 qpair failed and we were unable to recover it.
00:31:29.234 [2024-11-27 10:03:44.597940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:29.234 [2024-11-27 10:03:44.597986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:29.234 [2024-11-27 10:03:44.598000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:29.234 [2024-11-27 10:03:44.598007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:29.234 [2024-11-27 10:03:44.598013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:29.234 [2024-11-27 10:03:44.598028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:29.234 qpair failed and we were unable to recover it.
00:31:29.234 [2024-11-27 10:03:44.607986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:29.234 [2024-11-27 10:03:44.608028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:29.234 [2024-11-27 10:03:44.608042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:29.234 [2024-11-27 10:03:44.608049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:29.234 [2024-11-27 10:03:44.608055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:29.234 [2024-11-27 10:03:44.608070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:29.234 qpair failed and we were unable to recover it.
00:31:29.234 [2024-11-27 10:03:44.618054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:29.234 [2024-11-27 10:03:44.618110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:29.234 [2024-11-27 10:03:44.618122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:29.234 [2024-11-27 10:03:44.618129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:29.234 [2024-11-27 10:03:44.618136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:29.234 [2024-11-27 10:03:44.618150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:29.234 qpair failed and we were unable to recover it.
00:31:29.234 [2024-11-27 10:03:44.628098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:29.235 [2024-11-27 10:03:44.628154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:29.235 [2024-11-27 10:03:44.628171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:29.235 [2024-11-27 10:03:44.628178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:29.235 [2024-11-27 10:03:44.628184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:29.235 [2024-11-27 10:03:44.628198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:29.235 qpair failed and we were unable to recover it.
00:31:29.235 [2024-11-27 10:03:44.638071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:29.235 [2024-11-27 10:03:44.638122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:29.235 [2024-11-27 10:03:44.638136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:29.235 [2024-11-27 10:03:44.638143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:29.235 [2024-11-27 10:03:44.638149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:29.235 [2024-11-27 10:03:44.638167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:29.235 qpair failed and we were unable to recover it.
00:31:29.235 [2024-11-27 10:03:44.648150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:29.235 [2024-11-27 10:03:44.648203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:29.235 [2024-11-27 10:03:44.648216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:29.235 [2024-11-27 10:03:44.648223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:29.235 [2024-11-27 10:03:44.648229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:29.235 [2024-11-27 10:03:44.648244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:29.235 qpair failed and we were unable to recover it.
00:31:29.235 [2024-11-27 10:03:44.658196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:29.235 [2024-11-27 10:03:44.658282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:29.235 [2024-11-27 10:03:44.658295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:29.235 [2024-11-27 10:03:44.658302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:29.235 [2024-11-27 10:03:44.658308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:29.235 [2024-11-27 10:03:44.658322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:29.235 qpair failed and we were unable to recover it.
00:31:29.235 [2024-11-27 10:03:44.668224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:29.235 [2024-11-27 10:03:44.668276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:29.235 [2024-11-27 10:03:44.668290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:29.235 [2024-11-27 10:03:44.668297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:29.235 [2024-11-27 10:03:44.668303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:29.235 [2024-11-27 10:03:44.668317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:29.235 qpair failed and we were unable to recover it.
00:31:29.235 [2024-11-27 10:03:44.678188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:29.235 [2024-11-27 10:03:44.678238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:29.235 [2024-11-27 10:03:44.678251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:29.235 [2024-11-27 10:03:44.678258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:29.235 [2024-11-27 10:03:44.678264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:29.235 [2024-11-27 10:03:44.678278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:29.235 qpair failed and we were unable to recover it.
00:31:29.235 [2024-11-27 10:03:44.688231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:29.235 [2024-11-27 10:03:44.688280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:29.235 [2024-11-27 10:03:44.688293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:29.235 [2024-11-27 10:03:44.688300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:29.235 [2024-11-27 10:03:44.688307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:29.235 [2024-11-27 10:03:44.688321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:29.235 qpair failed and we were unable to recover it.
00:31:29.497 [2024-11-27 10:03:44.698260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:29.497 [2024-11-27 10:03:44.698320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:29.497 [2024-11-27 10:03:44.698334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:29.497 [2024-11-27 10:03:44.698341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:29.497 [2024-11-27 10:03:44.698347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:29.497 [2024-11-27 10:03:44.698361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:29.497 qpair failed and we were unable to recover it.
00:31:29.497 [2024-11-27 10:03:44.708333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:29.497 [2024-11-27 10:03:44.708388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:29.497 [2024-11-27 10:03:44.708400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:29.497 [2024-11-27 10:03:44.708408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:29.497 [2024-11-27 10:03:44.708414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:29.497 [2024-11-27 10:03:44.708428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:29.497 qpair failed and we were unable to recover it.
00:31:29.497 [2024-11-27 10:03:44.718303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:29.497 [2024-11-27 10:03:44.718355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:29.497 [2024-11-27 10:03:44.718368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:29.497 [2024-11-27 10:03:44.718379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:29.497 [2024-11-27 10:03:44.718385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:29.497 [2024-11-27 10:03:44.718399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:29.497 qpair failed and we were unable to recover it.
00:31:29.497 [2024-11-27 10:03:44.728334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:29.497 [2024-11-27 10:03:44.728379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:29.497 [2024-11-27 10:03:44.728393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:29.497 [2024-11-27 10:03:44.728400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:29.497 [2024-11-27 10:03:44.728406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90
00:31:29.497 [2024-11-27 10:03:44.728419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:29.497 qpair failed and we were unable to recover it.
00:31:29.497 [2024-11-27 10:03:44.738376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:29.497 [2024-11-27 10:03:44.738467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:29.497 [2024-11-27 10:03:44.738480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:29.497 [2024-11-27 10:03:44.738487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:29.497 [2024-11-27 10:03:44.738493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:29.497 [2024-11-27 10:03:44.738507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:29.497 qpair failed and we were unable to recover it. 00:31:29.497 [2024-11-27 10:03:44.748317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:29.497 [2024-11-27 10:03:44.748373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:29.497 [2024-11-27 10:03:44.748387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:29.497 [2024-11-27 10:03:44.748395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:29.497 [2024-11-27 10:03:44.748401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:29.497 [2024-11-27 10:03:44.748415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:29.497 qpair failed and we were unable to recover it. 00:31:29.497 [2024-11-27 10:03:44.758395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:29.497 [2024-11-27 10:03:44.758441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:29.497 [2024-11-27 10:03:44.758454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:29.497 [2024-11-27 10:03:44.758461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:29.497 [2024-11-27 10:03:44.758468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:29.497 [2024-11-27 10:03:44.758485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:29.497 qpair failed and we were unable to recover it. 
00:31:29.497 [2024-11-27 10:03:44.768439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:29.497 [2024-11-27 10:03:44.768491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:29.497 [2024-11-27 10:03:44.768504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:29.497 [2024-11-27 10:03:44.768511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:29.497 [2024-11-27 10:03:44.768517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:29.497 [2024-11-27 10:03:44.768531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:29.497 qpair failed and we were unable to recover it. 00:31:29.497 [2024-11-27 10:03:44.778517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:29.497 [2024-11-27 10:03:44.778572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:29.497 [2024-11-27 10:03:44.778584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:29.497 [2024-11-27 10:03:44.778591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:29.497 [2024-11-27 10:03:44.778597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:29.497 [2024-11-27 10:03:44.778611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:29.497 qpair failed and we were unable to recover it. 00:31:29.497 [2024-11-27 10:03:44.788526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:29.497 [2024-11-27 10:03:44.788584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:29.497 [2024-11-27 10:03:44.788597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:29.497 [2024-11-27 10:03:44.788604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:29.497 [2024-11-27 10:03:44.788610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:29.498 [2024-11-27 10:03:44.788624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:29.498 qpair failed and we were unable to recover it. 
00:31:29.498 [2024-11-27 10:03:44.798405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:29.498 [2024-11-27 10:03:44.798456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:29.498 [2024-11-27 10:03:44.798469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:29.498 [2024-11-27 10:03:44.798476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:29.498 [2024-11-27 10:03:44.798482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:29.498 [2024-11-27 10:03:44.798496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:29.498 qpair failed and we were unable to recover it. 00:31:29.498 [2024-11-27 10:03:44.808528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:29.498 [2024-11-27 10:03:44.808582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:29.498 [2024-11-27 10:03:44.808596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:29.498 [2024-11-27 10:03:44.808603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:29.498 [2024-11-27 10:03:44.808609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:29.498 [2024-11-27 10:03:44.808623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:29.498 qpair failed and we were unable to recover it. 00:31:29.498 [2024-11-27 10:03:44.818623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:29.498 [2024-11-27 10:03:44.818676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:29.498 [2024-11-27 10:03:44.818689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:29.498 [2024-11-27 10:03:44.818696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:29.498 [2024-11-27 10:03:44.818703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:29.498 [2024-11-27 10:03:44.818716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:29.498 qpair failed and we were unable to recover it. 
00:31:29.498 [2024-11-27 10:03:44.828655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:29.498 [2024-11-27 10:03:44.828709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:29.498 [2024-11-27 10:03:44.828722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:29.498 [2024-11-27 10:03:44.828729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:29.498 [2024-11-27 10:03:44.828736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:29.498 [2024-11-27 10:03:44.828751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:29.498 qpair failed and we were unable to recover it. 00:31:29.498 [2024-11-27 10:03:44.838634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:29.498 [2024-11-27 10:03:44.838684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:29.498 [2024-11-27 10:03:44.838697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:29.498 [2024-11-27 10:03:44.838705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:29.498 [2024-11-27 10:03:44.838711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:29.498 [2024-11-27 10:03:44.838725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:29.498 qpair failed and we were unable to recover it. 00:31:29.498 [2024-11-27 10:03:44.848633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:29.498 [2024-11-27 10:03:44.848682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:29.498 [2024-11-27 10:03:44.848698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:29.498 [2024-11-27 10:03:44.848705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:29.498 [2024-11-27 10:03:44.848711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:29.498 [2024-11-27 10:03:44.848725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:29.498 qpair failed and we were unable to recover it. 
00:31:29.498 [2024-11-27 10:03:44.858747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:29.498 [2024-11-27 10:03:44.858803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:29.498 [2024-11-27 10:03:44.858816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:29.498 [2024-11-27 10:03:44.858823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:29.498 [2024-11-27 10:03:44.858829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:29.498 [2024-11-27 10:03:44.858843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:29.498 qpair failed and we were unable to recover it. 00:31:29.498 [2024-11-27 10:03:44.868768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:29.498 [2024-11-27 10:03:44.868821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:29.498 [2024-11-27 10:03:44.868834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:29.498 [2024-11-27 10:03:44.868841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:29.498 [2024-11-27 10:03:44.868847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:29.498 [2024-11-27 10:03:44.868862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:29.498 qpair failed and we were unable to recover it. 00:31:29.498 [2024-11-27 10:03:44.878749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:29.498 [2024-11-27 10:03:44.878792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:29.498 [2024-11-27 10:03:44.878804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:29.498 [2024-11-27 10:03:44.878811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:29.498 [2024-11-27 10:03:44.878818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:29.498 [2024-11-27 10:03:44.878832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:29.498 qpair failed and we were unable to recover it. 
00:31:29.498 [2024-11-27 10:03:44.888758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:29.498 [2024-11-27 10:03:44.888803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:29.498 [2024-11-27 10:03:44.888816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:29.498 [2024-11-27 10:03:44.888824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:29.498 [2024-11-27 10:03:44.888830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:29.498 [2024-11-27 10:03:44.888847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:29.498 qpair failed and we were unable to recover it. 00:31:29.498 [2024-11-27 10:03:44.898825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:29.498 [2024-11-27 10:03:44.898882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:29.498 [2024-11-27 10:03:44.898895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:29.498 [2024-11-27 10:03:44.898902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:29.498 [2024-11-27 10:03:44.898908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:29.498 [2024-11-27 10:03:44.898922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:29.498 qpair failed and we were unable to recover it. 00:31:29.498 [2024-11-27 10:03:44.908896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:29.498 [2024-11-27 10:03:44.908952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:29.498 [2024-11-27 10:03:44.908977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:29.498 [2024-11-27 10:03:44.908986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:29.498 [2024-11-27 10:03:44.908993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:29.498 [2024-11-27 10:03:44.909012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:29.498 qpair failed and we were unable to recover it. 
00:31:29.498 [2024-11-27 10:03:44.918859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:29.498 [2024-11-27 10:03:44.918914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:29.498 [2024-11-27 10:03:44.918938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:29.498 [2024-11-27 10:03:44.918946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:29.498 [2024-11-27 10:03:44.918953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:29.499 [2024-11-27 10:03:44.918973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:29.499 qpair failed and we were unable to recover it. 00:31:29.499 [2024-11-27 10:03:44.928761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:29.499 [2024-11-27 10:03:44.928807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:29.499 [2024-11-27 10:03:44.928824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:29.499 [2024-11-27 10:03:44.928832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:29.499 [2024-11-27 10:03:44.928839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa220000b90 00:31:29.499 [2024-11-27 10:03:44.928855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:29.499 qpair failed and we were unable to recover it. 00:31:29.499 [2024-11-27 10:03:44.939125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:29.499 [2024-11-27 10:03:44.939233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:29.499 [2024-11-27 10:03:44.939298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:29.499 [2024-11-27 10:03:44.939325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:29.499 [2024-11-27 10:03:44.939346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa228000b90 00:31:29.499 [2024-11-27 10:03:44.939401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:29.499 qpair failed and we were unable to recover it. 
00:31:29.499 [2024-11-27 10:03:44.948982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:29.499 [2024-11-27 10:03:44.949071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:29.499 [2024-11-27 10:03:44.949105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:29.499 [2024-11-27 10:03:44.949122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:29.499 [2024-11-27 10:03:44.949138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa228000b90 00:31:29.499 [2024-11-27 10:03:44.949183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:29.499 qpair failed and we were unable to recover it. 00:31:29.499 [2024-11-27 10:03:44.958980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:29.499 [2024-11-27 10:03:44.959083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:29.499 [2024-11-27 10:03:44.959146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:29.499 [2024-11-27 10:03:44.959185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:29.499 [2024-11-27 10:03:44.959207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa21c000b90 00:31:29.499 [2024-11-27 10:03:44.959263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:29.499 qpair failed and we were unable to recover it. 00:31:29.759 [2024-11-27 10:03:44.969002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:29.759 [2024-11-27 10:03:44.969073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:29.759 [2024-11-27 10:03:44.969104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:29.759 [2024-11-27 10:03:44.969120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:29.759 [2024-11-27 10:03:44.969134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa21c000b90 00:31:29.759 [2024-11-27 10:03:44.969176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:29.759 qpair failed and we were unable to recover it. 00:31:29.760 [2024-11-27 10:03:44.969441] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:31:29.760 A controller has encountered a failure and is being reset. 00:31:29.760 Controller properly reset. 
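The retry storm above is the expected host-side view of the target-disconnect test: each NVMe-oF fabric CONNECT for an I/O queue pair is rejected with "Unknown controller ID 0x1" (sct 1, sc 130) because the target has already dropped that controller, and the host keeps re-probing until a failed Keep Alive forces a full controller reset. A minimal sketch of an equivalent host-side reconnect probe with nvme-cli follows; the transport, address, port, and subsystem NQN are taken from the log above, while the loop bounds, sleep interval, and messages are illustrative assumptions, not part of the test suite.

#!/usr/bin/env bash
# Illustrative reconnect probe (nvme-cli), not the test's own code.
# Endpoint values come from the log above; loop bounds are arbitrary.
# Requires root and the nvme-tcp host modules loaded.
TRADDR=10.0.0.2 TRSVCID=4420 SUBNQN=nqn.2016-06.io.spdk:cnode1

for attempt in $(seq 1 10); do
    if nvme connect -t tcp -a "$TRADDR" -s "$TRSVCID" -n "$SUBNQN"; then
        echo "attempt $attempt: connected"
        nvme disconnect -n "$SUBNQN"
        break
    fi
    echo "attempt $attempt: connect failed, retrying" >&2
    sleep 0.1
done

In the traced run the SPDK host performs this probing internally (nvme_fabric_qpair_connect_poll), which is why the same failure signature repeats at roughly 10 ms intervals until the reset completes.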
00:31:29.760 Initializing NVMe Controllers 00:31:29.760 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:29.760 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:29.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:31:29.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:31:29.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:31:29.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:31:29.760 Initialization complete. Launching workers. 00:31:29.760 Starting thread on core 1 00:31:29.760 Starting thread on core 2 00:31:29.760 Starting thread on core 3 00:31:29.760 Starting thread on core 0 00:31:29.760 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:31:29.760 00:31:29.760 real 0m11.443s 00:31:29.760 user 0m21.772s 00:31:29.760 sys 0m3.990s 00:31:29.760 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:29.760 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:29.760 ************************************ 00:31:29.760 END TEST nvmf_target_disconnect_tc2 00:31:29.760 ************************************ 00:31:29.760 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:31:29.760 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:31:29.760 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:31:29.760 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:29.760 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:31:29.760 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:29.760 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:31:29.760 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:29.760 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:29.760 rmmod nvme_tcp 00:31:29.760 rmmod nvme_fabrics 00:31:29.760 rmmod nvme_keyring 00:31:29.760 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:29.760 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:31:29.760 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:31:29.760 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 4073171 ']' 00:31:29.760 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 4073171 00:31:29.760 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 4073171 ']' 00:31:29.760 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 4073171 00:31:29.760 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:31:29.760 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:31:29.760 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4073171 00:31:30.022 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:31:30.022 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:31:30.022 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4073171' 00:31:30.022 killing process with pid 4073171 00:31:30.022 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 4073171 00:31:30.022 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 4073171 00:31:30.022 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:30.022 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:30.022 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:30.022 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:31:30.022 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:31:30.022 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:30.022 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:31:30.022 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:30.022 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:30.022 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:30.022 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:30.022 10:03:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:32.571 10:03:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:32.571 00:31:32.571 real 0m21.876s 00:31:32.571 user 0m49.628s 00:31:32.571 sys 0m10.219s 00:31:32.571 10:03:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:32.571 10:03:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:32.571 ************************************ 00:31:32.571 END TEST nvmf_target_disconnect 00:31:32.571 ************************************ 00:31:32.571 10:03:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:32.571 00:31:32.571 real 6m33.369s 00:31:32.571 user 11m34.762s 00:31:32.571 sys 2m15.793s 00:31:32.571 10:03:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:32.571 10:03:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.571 ************************************ 00:31:32.571 END TEST nvmf_host 00:31:32.571 ************************************ 00:31:32.571 10:03:47 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:31:32.571 10:03:47 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:31:32.571 10:03:47 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:31:32.571 10:03:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:32.571 10:03:47 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:32.571 10:03:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:32.571 ************************************ 00:31:32.571 START TEST nvmf_target_core_interrupt_mode 00:31:32.571 ************************************ 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:31:32.571 * Looking for test storage... 00:31:32.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:32.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.571 --rc genhtml_branch_coverage=1 00:31:32.571 --rc genhtml_function_coverage=1 00:31:32.571 --rc genhtml_legend=1 00:31:32.571 --rc geninfo_all_blocks=1 00:31:32.571 --rc geninfo_unexecuted_blocks=1 00:31:32.571 00:31:32.571 ' 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:32.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.571 --rc genhtml_branch_coverage=1 00:31:32.571 --rc genhtml_function_coverage=1 00:31:32.571 --rc genhtml_legend=1 00:31:32.571 --rc geninfo_all_blocks=1 00:31:32.571 --rc geninfo_unexecuted_blocks=1 00:31:32.571 00:31:32.571 ' 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:32.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.571 --rc genhtml_branch_coverage=1 00:31:32.571 --rc genhtml_function_coverage=1 00:31:32.571 --rc genhtml_legend=1 00:31:32.571 --rc geninfo_all_blocks=1 00:31:32.571 --rc geninfo_unexecuted_blocks=1 00:31:32.571 00:31:32.571 ' 00:31:32.571 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:32.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.572 --rc genhtml_branch_coverage=1 00:31:32.572 --rc genhtml_function_coverage=1 00:31:32.572 --rc genhtml_legend=1 00:31:32.572 --rc geninfo_all_blocks=1 00:31:32.572 --rc geninfo_unexecuted_blocks=1 00:31:32.572 00:31:32.572 ' 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:32.572 ************************************ 00:31:32.572 START TEST nvmf_abort 00:31:32.572 ************************************ 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:31:32.572 * Looking for test storage... 00:31:32.572 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:31:32.572 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:32.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.836 --rc genhtml_branch_coverage=1 00:31:32.836 --rc genhtml_function_coverage=1 00:31:32.836 --rc genhtml_legend=1 00:31:32.836 --rc geninfo_all_blocks=1 00:31:32.836 --rc geninfo_unexecuted_blocks=1 00:31:32.836 00:31:32.836 ' 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:32.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.836 --rc genhtml_branch_coverage=1 00:31:32.836 --rc genhtml_function_coverage=1 00:31:32.836 --rc genhtml_legend=1 00:31:32.836 --rc geninfo_all_blocks=1 00:31:32.836 --rc geninfo_unexecuted_blocks=1 00:31:32.836 00:31:32.836 ' 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:32.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.836 --rc genhtml_branch_coverage=1 00:31:32.836 --rc genhtml_function_coverage=1 00:31:32.836 --rc genhtml_legend=1 00:31:32.836 --rc geninfo_all_blocks=1 00:31:32.836 --rc geninfo_unexecuted_blocks=1 00:31:32.836 00:31:32.836 ' 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:32.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.836 --rc genhtml_branch_coverage=1 00:31:32.836 --rc genhtml_function_coverage=1 00:31:32.836 --rc genhtml_legend=1 00:31:32.836 --rc geninfo_all_blocks=1 00:31:32.836 --rc geninfo_unexecuted_blocks=1 00:31:32.836 00:31:32.836 ' 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:32.836 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:32.837 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:32.837 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:32.837 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:32.837 10:03:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:32.837 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:32.837 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:32.837 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:32.837 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:32.837 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:32.837 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:31:32.837 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:31:32.837 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:32.837 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:32.837 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:32.837 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:32.837 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:32.837 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:32.837 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:32.837 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:32.837 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:32.837 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:32.837 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:31:32.837 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:40.978 10:03:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:40.978 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
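The records above show how nvmf/common.sh builds its NIC whitelist: supported PCI device IDs are bucketed into the e810, x722 and mlx arrays (Intel 0x1592/0x159b and 0x37d2, plus the Mellanox ConnectX family), and the records that follow resolve each matching PCI function to its bound kernel net device through a sysfs glob. A minimal standalone sketch of that lookup, using the 0000:4b:00.0 address this run discovered (the helper name is ours, not SPDK's):

#!/usr/bin/env bash
# Sketch: map a PCI function to its kernel net device, the way the
# gather_supported_nvmf_pci_devs loop traced above does via sysfs.
pci_to_netdevs() {
    local pci=$1
    local pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one dir per bound netdev
    [[ -e ${pci_net_devs[0]} ]] || return 1                  # glob unmatched: no netdev bound
    pci_net_devs=("${pci_net_devs[@]##*/}")                  # keep only the device names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
}
pci_to_netdevs 0000:4b:00.0   # printed cvl_0_0 in this run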
00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:40.978 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:40.978 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:40.978 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:40.978 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:40.979 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:40.979 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:40.979 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:40.979 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:40.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:40.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:31:40.979 00:31:40.979 --- 10.0.0.2 ping statistics --- 00:31:40.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.979 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:31:40.979 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:40.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:40.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:31:40.979 00:31:40.979 --- 10.0.0.1 ping statistics --- 00:31:40.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.979 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:31:40.979 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:40.979 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:31:40.979 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:40.979 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:40.979 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:40.979 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:40.979 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:40.979 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:40.979 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:40.979 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:31:40.979 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:40.979 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:40.979 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:40.979 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=4078625 00:31:40.979 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 4078625 00:31:40.979 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:31:40.979 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 4078625 ']' 00:31:40.979 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:40.979 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:40.979 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:40.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:40.979 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:40.979 10:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:40.979 [2024-11-27 10:03:55.720111] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:40.979 [2024-11-27 10:03:55.721243] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:31:40.979 [2024-11-27 10:03:55.721293] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:40.979 [2024-11-27 10:03:55.819527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:40.979 [2024-11-27 10:03:55.871034] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:40.979 [2024-11-27 10:03:55.871084] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:40.979 [2024-11-27 10:03:55.871094] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:40.979 [2024-11-27 10:03:55.871101] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:40.979 [2024-11-27 10:03:55.871107] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:40.979 [2024-11-27 10:03:55.872935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:40.979 [2024-11-27 10:03:55.873097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:40.979 [2024-11-27 10:03:55.873098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:40.979 [2024-11-27 10:03:55.949048] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:40.979 [2024-11-27 10:03:55.950166] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:40.979 [2024-11-27 10:03:55.950711] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:40.979 [2024-11-27 10:03:55.950796] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:41.240 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:41.240 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:31:41.240 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:41.240 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:41.240 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:41.240 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:41.240 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:31:41.240 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.240 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:41.240 [2024-11-27 10:03:56.577990] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:41.240 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.240 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:31:41.240 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.240 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:41.240 Malloc0 00:31:41.240 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.240 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:41.240 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.240 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:41.240 Delay0 00:31:41.240 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.240 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:41.240 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.240 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:41.240 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.240 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:31:41.240 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
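Everything from nvmf_create_transport through the listener added just below is plain JSON-RPC against the app's /var/tmp/spdk.sock. Restated as a standalone sequence with scripts/rpc.py, using exactly the arguments traced in this run (the comments glossing each flag are ours, not SPDK output):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256           # transport options as traced
$rpc bdev_malloc_create 64 4096 -b Malloc0                    # 64 MiB bdev, 4096 B blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000              # layer fixed I/O latency on Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420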
00:31:41.240 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:41.240 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.240 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:41.240 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.240 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:41.240 [2024-11-27 10:03:56.681951] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:41.240 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.240 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:41.240 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.240 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:41.240 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.240 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:31:41.501 [2024-11-27 10:03:56.824000] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:44.134 Initializing NVMe Controllers 00:31:44.135 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:44.135 controller IO queue size 128 less than required 00:31:44.135 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:31:44.135 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:31:44.135 Initialization complete. Launching workers. 
00:31:44.135 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28538 00:31:44.135 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28595, failed to submit 66 00:31:44.135 success 28538, unsuccessful 57, failed 0 00:31:44.135 10:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:44.135 10:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.135 10:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:44.135 10:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.135 10:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:31:44.135 10:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:31:44.135 10:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:44.135 10:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:31:44.135 10:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:44.135 10:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:31:44.135 10:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:44.135 10:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:44.135 rmmod nvme_tcp 00:31:44.135 rmmod nvme_fabrics 00:31:44.135 rmmod nvme_keyring 00:31:44.135 10:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:44.135 10:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:31:44.135 10:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:31:44.135 10:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 4078625 ']' 00:31:44.135 10:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 4078625 00:31:44.135 10:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 4078625 ']' 00:31:44.135 10:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 4078625 00:31:44.135 10:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:31:44.135 10:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:44.135 10:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4078625 00:31:44.135 10:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:44.135 10:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:44.135 10:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4078625' 00:31:44.135 killing process with pid 4078625 
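killprocess, traced above and completed by the kill/wait records that follow, is the teardown idiom for the target process: confirm the PID still answers kill -0, check its comm so only an SPDK reactor (and never, say, a sudo wrapper) gets signalled, then kill and reap it. A condensed sketch of that guard (the function name matches the trace; the log already shows comm resolving to reactor_1):

killprocess() {
    local pid=$1 process_name
    kill -0 "$pid" 2>/dev/null || return 0            # already gone, nothing to do
    process_name=$(ps --no-headers -o comm= "$pid")   # reactor_1 in this run
    [[ $process_name == sudo ]] && return 1           # condensed: refuse to kill the wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                               # reap our child; ignore the kill status
}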
00:31:44.135 10:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 4078625 00:31:44.135 10:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 4078625 00:31:44.135 10:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:44.135 10:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:44.135 10:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:44.135 10:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:31:44.135 10:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:31:44.135 10:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:44.135 10:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:31:44.135 10:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:44.135 10:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:44.135 10:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:44.135 10:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:44.135 10:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:46.052 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:46.052 00:31:46.052 real 0m13.518s 00:31:46.052 user 0m11.271s 00:31:46.052 sys 0m7.040s 00:31:46.052 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:46.052 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:46.052 ************************************ 00:31:46.052 END TEST nvmf_abort 00:31:46.052 ************************************ 00:31:46.052 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:31:46.052 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:46.053 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:46.053 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:46.053 ************************************ 00:31:46.053 START TEST nvmf_ns_hotplug_stress 00:31:46.053 ************************************ 00:31:46.053 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:31:46.315 * Looking for test storage... 
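nvmftestfini, traced above, unwinds exactly what nvmf_tcp_init set up: the firewall rule is removed by filtering every SPDK_NVMF-tagged line out of iptables-save output and restoring the rest, the test namespace goes away, and the leftover address is flushed. A symmetric setup/teardown sketch of that pattern, with the interface names and addresses this run used:

# Setup: move the target NIC into a namespace and tag the firewall rule
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
         -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Teardown: the comment tag makes firewall cleanup a one-liner
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk   # assumed: what _remove_spdk_ns does (its trace is suppressed above)
ip -4 addr flush cvl_0_1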
00:31:46.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:46.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.315 --rc genhtml_branch_coverage=1 00:31:46.315 --rc genhtml_function_coverage=1 00:31:46.315 --rc genhtml_legend=1 00:31:46.315 --rc geninfo_all_blocks=1 00:31:46.315 --rc geninfo_unexecuted_blocks=1 00:31:46.315 00:31:46.315 ' 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:46.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.315 --rc genhtml_branch_coverage=1 00:31:46.315 --rc genhtml_function_coverage=1 00:31:46.315 --rc genhtml_legend=1 00:31:46.315 --rc geninfo_all_blocks=1 00:31:46.315 --rc geninfo_unexecuted_blocks=1 00:31:46.315 00:31:46.315 ' 00:31:46.315 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:46.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.316 --rc genhtml_branch_coverage=1 00:31:46.316 --rc genhtml_function_coverage=1 00:31:46.316 --rc genhtml_legend=1 00:31:46.316 --rc geninfo_all_blocks=1 00:31:46.316 --rc geninfo_unexecuted_blocks=1 00:31:46.316 00:31:46.316 ' 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:46.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.316 --rc genhtml_branch_coverage=1 00:31:46.316 --rc genhtml_function_coverage=1 
00:31:46.316 --rc genhtml_legend=1 00:31:46.316 --rc geninfo_all_blocks=1 00:31:46.316 --rc geninfo_unexecuted_blocks=1 00:31:46.316 00:31:46.316 ' 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
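The lt 1.15 2 / cmp_versions trace above is how autotest_common.sh decides whether the installed lcov predates 2.x before picking the legacy --rc lcov_* option spellings: split both versions on ., - and :, then compare component by component, treating a missing component as zero. A compact standalone sketch of that comparison (the body is condensed from the traced steps; the real cmp_versions also handles > and =):

# Sketch of the traced comparison: succeeds when $1 is strictly older than $2
lt() {
    local -a ver1 ver2
    local v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # this component already newer
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older, done
    done
    return 1   # all components equal: not strictly less
}
lt 1.15 2 && echo "lcov < 2: use legacy --rc lcov_* option names"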
00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:31:46.316 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:54.464 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:54.464 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:31:54.464 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:54.464 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:54.464 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:54.464 10:04:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=()
00:31:54.464 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers
00:31:54.464 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=()
00:31:54.464 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs
00:31:54.464 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=()
00:31:54.464 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810
00:31:54.464 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=()
00:31:54.464 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722
00:31:54.464 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=()
00:31:54.464 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:31:54.465 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:31:54.465 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]]
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:31:54.465 Found net devices under 0000:4b:00.0: cvl_0_0
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]]
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:31:54.465 Found net devices under 0000:4b:00.1: cvl_0_1
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:31:54.465 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:31:54.465 10:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:31:54.465 10:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:31:54.465 10:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:31:54.465 10:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:31:54.465 10:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:31:54.465 10:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:54.465 10:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:31:54.465 10:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:31:54.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:54.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms
00:31:54.465
00:31:54.465 --- 10.0.0.2 ping statistics ---
00:31:54.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:54.465 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms
00:31:54.465 10:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:54.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:54.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.338 ms
00:31:54.465
00:31:54.465 --- 10.0.0.1 ping statistics ---
00:31:54.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:54.465 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms
00:31:54.465 10:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:54.465 10:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0
00:31:54.466 10:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:31:54.466 10:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:54.466 10:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:31:54.466 10:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:31:54.466 10:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:54.466 10:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:31:54.466 10:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:31:54.466 10:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:31:54.466 10:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:54.466 10:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:54.466 10:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:31:54.466 10:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=4083440
00:31:54.466 10:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 4083440
00:31:54.466 10:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE
00:31:54.466 10:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 4083440 ']'
00:31:54.466 10:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:54.466 10:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:54.466 10:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
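The network plumbing traced above is worth reading as a recipe: one port of the ice NIC (cvl_0_0) is moved into a private network namespace for the target, the peer port (cvl_0_1) stays in the root namespace for the initiator, so traffic really crosses the physical link rather than loopback. A minimal standalone sketch of the same setup (interface names, addresses, and the namespace name are the ones from this run and will differ on other hardware):

    # Give the target its own namespace so target and initiator use the wire, not loopback.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP traffic on port 4420, then check reachability both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1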
00:31:54.466 10:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:54.466 10:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:31:54.466 [2024-11-27 10:04:09.305443] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:31:54.466 [2024-11-27 10:04:09.306563] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization...
00:31:54.466 [2024-11-27 10:04:09.306617] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:54.466 [2024-11-27 10:04:09.404473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:31:54.466 [2024-11-27 10:04:09.456244] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:54.466 [2024-11-27 10:04:09.456294] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:54.466 [2024-11-27 10:04:09.456303] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:54.466 [2024-11-27 10:04:09.456310] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:54.466 [2024-11-27 10:04:09.456317] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:54.466 [2024-11-27 10:04:09.458151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:31:54.466 [2024-11-27 10:04:09.458323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:31:54.466 [2024-11-27 10:04:09.458509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:54.466 [2024-11-27 10:04:09.536239] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:31:54.466 [2024-11-27 10:04:09.537495] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:31:54.466 [2024-11-27 10:04:09.538107] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:31:54.466 [2024-11-27 10:04:09.538225] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
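The startup notices confirm what the launch line requested: core mask 0xE yields three reactors on cores 1-3, and --interrupt-mode switches every spdk_thread to interrupt-driven operation. The launch-and-wait step that waitforlisten performs can be approximated as follows (a sketch, not the autotest helper itself; spdk_get_version is a standard SPDK RPC method, used here only as a liveness probe, and the retry count mirrors max_retries=100 from the trace):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!
    # The RPC endpoint is a UNIX domain socket, so it is reachable from outside the netns.
    for _ in $(seq 1 100); do
        "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
        sleep 0.1
    done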
00:31:54.727 10:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:54.727 10:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
00:31:54.727 10:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:31:54.727 10:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:54.727 10:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:31:54.727 10:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:54.727 10:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:31:54.727 10:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:31:54.989 [2024-11-27 10:04:10.331464] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:54.989 10:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:31:55.249 10:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:55.249 [2024-11-27 10:04:10.712279] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:55.510 10:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:31:55.510 10:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:31:55.770 Malloc0
00:31:55.770 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:31:56.032 Delay0
00:31:56.032 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:56.032 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:31:56.294 NULL1
00:31:56.294 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
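The RPC sequence above builds the whole stack the stress test needs: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420 (plus the discovery listener), a 32 MiB Malloc0 bdev wrapped by a Delay0 delay bdev with large artificial latencies, and a resizable NULL1 bdev, with Delay0 and NULL1 attached as namespaces. The iterations that follow in the trace then hammer namespace hotplug while spdk_nvme_perf holds I/O against the subsystem. Condensed into a sketch (reconstructed from the trace; the real ns_hotplug_stress.sh also re-checks the perf process between individual RPCs):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc bdev_null_create NULL1 1000 512
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # 30 seconds of random reads from an initiator-side process...
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!
    # ...while namespace 1 is repeatedly removed and re-added and NULL1 is grown.
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 "$null_size"
    done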
00:31:56.555 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=4083998 00:31:56.555 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:31:56.555 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:31:56.555 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:56.816 10:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:57.078 10:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:31:57.078 10:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:31:57.078 true 00:31:57.078 10:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:31:57.078 10:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:57.340 10:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:57.603 10:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:31:57.603 10:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:31:57.864 true 00:31:57.864 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:31:57.864 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:57.864 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:58.126 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:31:58.126 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:31:58.387 true 00:31:58.387 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:31:58.387 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:58.648 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:58.910 10:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:31:58.910 10:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:31:58.910 true 00:31:58.910 10:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:31:58.910 10:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:59.171 10:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:59.432 10:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:31:59.432 10:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:31:59.432 true 00:31:59.693 10:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:31:59.693 10:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:59.693 10:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:59.952 10:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:31:59.952 10:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:32:00.213 true 00:32:00.213 10:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:00.213 10:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:00.213 10:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:32:00.473 10:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:32:00.473 10:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:32:00.733 true 00:32:00.733 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:00.733 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:00.993 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:00.993 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:32:00.993 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:32:01.262 true 00:32:01.262 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:01.262 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:01.523 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:01.523 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:32:01.523 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:32:01.782 true 00:32:01.782 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:01.782 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:02.042 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:02.042 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:32:02.042 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:32:02.302 true 00:32:02.302 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 4083998 00:32:02.302 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:02.561 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:02.822 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:32:02.822 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:32:02.822 true 00:32:02.822 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:02.822 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:03.083 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:03.343 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:32:03.343 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:32:03.343 true 00:32:03.343 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:03.343 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:03.604 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:03.877 10:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:32:03.878 10:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:32:03.878 true 00:32:03.878 10:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:03.878 10:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:04.145 10:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:04.405 10:04:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:32:04.405 10:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:32:04.405 true 00:32:04.664 10:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:04.664 10:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:04.664 10:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:04.925 10:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:32:04.925 10:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:32:05.185 true 00:32:05.185 10:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:05.185 10:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:05.185 10:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:05.446 10:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:32:05.446 10:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:32:05.706 true 00:32:05.706 10:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:05.706 10:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:05.968 10:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:05.968 10:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:32:05.968 10:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:32:06.229 true 00:32:06.229 10:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:06.229 10:04:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:06.489 10:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:06.489 10:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:32:06.489 10:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:32:06.749 true 00:32:06.749 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:06.749 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:07.009 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:07.269 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:32:07.269 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:32:07.269 true 00:32:07.269 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:07.269 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:07.530 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:07.790 10:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:32:07.790 10:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:32:07.790 true 00:32:07.790 10:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:07.790 10:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:08.050 10:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:08.311 10:04:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:32:08.311 10:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:32:08.311 true 00:32:08.311 10:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:08.311 10:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:08.571 10:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:08.831 10:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:32:08.831 10:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:32:08.831 true 00:32:09.091 10:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:09.091 10:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:09.091 10:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:09.351 10:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:32:09.351 10:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:32:09.612 true 00:32:09.612 10:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:09.612 10:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:09.872 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:09.872 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:32:09.872 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:32:10.132 true 00:32:10.132 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:10.132 10:04:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:10.392 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:10.392 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:32:10.392 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:32:10.653 true 00:32:10.653 10:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:10.653 10:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:10.913 10:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:11.174 10:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:32:11.174 10:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:32:11.174 true 00:32:11.174 10:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:11.174 10:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:11.434 10:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:11.694 10:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:32:11.694 10:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:32:11.694 true 00:32:11.694 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:11.694 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:11.956 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:12.218 10:04:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:32:12.218 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:32:12.218 true 00:32:12.218 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:12.219 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:12.478 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:12.738 10:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:32:12.738 10:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:32:12.738 true 00:32:12.999 10:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:13.000 10:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:13.000 10:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:13.260 10:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:32:13.260 10:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:32:13.521 true 00:32:13.521 10:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:13.521 10:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:13.521 10:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:13.781 10:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:32:13.781 10:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:32:14.041 true 00:32:14.041 10:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:14.041 10:04:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:14.302 10:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:14.302 10:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:32:14.302 10:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:32:14.564 true 00:32:14.564 10:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:14.564 10:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:14.826 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:14.826 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:32:14.826 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:32:15.088 true 00:32:15.088 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:15.088 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:15.349 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:15.349 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:32:15.349 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:32:15.610 true 00:32:15.610 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:15.610 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:15.871 10:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:16.132 10:04:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:32:16.132 10:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:32:16.132 true 00:32:16.132 10:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:16.132 10:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:16.393 10:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:16.654 10:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:32:16.654 10:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:32:16.654 true 00:32:16.654 10:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:16.654 10:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:16.916 10:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:17.177 10:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:32:17.177 10:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:32:17.177 true 00:32:17.438 10:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:17.438 10:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:17.438 10:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:17.699 10:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:32:17.699 10:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:32:17.959 true 00:32:17.959 10:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:17.959 10:04:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:17.959 10:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:18.220 10:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:32:18.220 10:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:32:18.480 true 00:32:18.480 10:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:18.480 10:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:18.742 10:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:18.742 10:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:32:18.742 10:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:32:19.003 true 00:32:19.003 10:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:19.003 10:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:19.263 10:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:19.523 10:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:32:19.523 10:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:32:19.523 true 00:32:19.523 10:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:19.523 10:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:19.783 10:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:20.043 10:04:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:32:20.043 10:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:32:20.043 true 00:32:20.043 10:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:20.043 10:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:20.303 10:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:20.563 10:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:32:20.563 10:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:32:20.563 true 00:32:20.563 10:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:20.563 10:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:20.823 10:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:21.084 10:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:32:21.084 10:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:32:21.344 true 00:32:21.344 10:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:21.344 10:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:21.344 10:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:21.604 10:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:32:21.604 10:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:32:21.864 true 00:32:21.864 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:21.864 10:04:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:21.864 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:22.125 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:32:22.125 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:32:22.386 true 00:32:22.386 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:22.386 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:22.646 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:22.646 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:32:22.646 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:32:22.907 true 00:32:22.907 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:22.907 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:23.167 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:23.167 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:32:23.167 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:32:23.428 true 00:32:23.428 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:23.428 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:23.688 10:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:23.948 10:04:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:32:23.948 10:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:32:23.948 true 00:32:23.948 10:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:23.948 10:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:24.208 10:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:24.469 10:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:32:24.469 10:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:32:24.469 true 00:32:24.469 10:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:24.469 10:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:24.729 10:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:24.989 10:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:32:24.989 10:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:32:24.989 true 00:32:25.250 10:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:25.250 10:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:25.250 10:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:25.511 10:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:32:25.511 10:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:32:25.771 true 00:32:25.771 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998 00:32:25.771 10:04:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:25.771 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:26.033 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:32:26.033 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:32:26.294 true
00:32:26.294 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998
00:32:26.294 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:26.554 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:26.554 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:32:26.554 10:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:32:26.816 true
00:32:26.816 Initializing NVMe Controllers
00:32:26.816 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:26.816 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1
00:32:26.816 Controller IO queue size 128, less than required.
00:32:26.816 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:26.816 WARNING: Some requested NVMe devices were skipped
00:32:26.816 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:32:26.816 Initialization complete. Launching workers.
00:32:26.816 ========================================================
00:32:26.816 Latency(us)
00:32:26.816 Device Information : IOPS MiB/s Average min max
00:32:26.816 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30477.17 14.88 4199.87 1123.16 11162.96
00:32:26.816 ========================================================
00:32:26.816 Total : 30477.17 14.88 4199.87 1123.16 11162.96
00:32:26.816
00:32:26.816 10:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4083998
00:32:26.816 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (4083998) - No such process
00:32:26.816 10:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 4083998
00:32:26.816 10:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:27.076 10:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:32:27.076 10:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:32:27.076 10:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:32:27.076 10:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:32:27.076 10:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:27.076 10:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:32:27.337 null0
00:32:27.337 10:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:27.337 10:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:27.337 10:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:32:27.599 null1
00:32:27.599 10:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:27.599 10:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:27.599 10:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:32:27.599 null2
00:32:27.599 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:27.599 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:27.599 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:32:27.859 null3 00:32:27.859 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:27.859 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:27.859 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:32:28.121 null4 00:32:28.121 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:28.121 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:28.121 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:32:28.121 null5 00:32:28.381 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:28.381 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:28.381 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:32:28.381 null6 00:32:28.381 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:28.381 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:28.381 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:32:28.659 null7 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
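
The phase traced above (target/ns_hotplug_stress.sh, lines 44-55 per the @NN markers) keeps one namespace churning while an I/O generator runs: detach namespace 1, reattach the Delay0 bdev, bump NULL1's size by one, and repeat until the generator (PID 4083998 here) exits, which is what the "kill: (4083998) - No such process" line signals. A minimal bash sketch, reconstructed from the trace rather than copied from the SPDK script; $rpc_py, $perf_pid, and the starting null_size are stand-ins/assumptions:

    null_size=1024                              # starting value is an assumption; the trace shows 1035..1054
    while kill -0 "$perf_pid"; do               # @44: loop while the I/O generator is still alive
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # @45
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # @46
        null_size=$((null_size + 1))                                      # @49
        $rpc_py bdev_null_resize NULL1 $null_size                         # @50
    done
    wait "$perf_pid"                                                      # @53: reap the exited generator
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1         # @54: final cleanup
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2         # @55
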
00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
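
Interleaved through the trace from here on, eight copies of one small helper race against each other, each owning a single namespace ID and null bdev. A minimal reconstruction from the @14-@18 markers (the function and argument names are assumptions; the RPC calls mirror the trace):

    add_remove() {
        local nsid=$1 bdev=$2                   # @14
        for ((i = 0; i < 10; i++)); do          # @16
            # attach the bdev as namespace $nsid, then pull it back out
            $rpc_py nvmf_subsystem_add_ns -n $nsid nqn.2016-06.io.spdk:cnode1 $bdev   # @17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $nsid         # @18
        done
    }
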
00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
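
The fan-out itself is traced at @58-@66: create one null bdev per worker, background one add_remove call per bdev, collect the PIDs, and block until all eight finish. A sketch under the same assumptions as above ($rpc_py standing in for the full scripts/rpc.py path):

    nthreads=8                                  # @58
    pids=()
    for ((i = 0; i < nthreads; i++)); do        # @59
        $rpc_py bdev_null_create "null$i" 100 4096   # @60: 100 MB null bdev, 4096-byte blocks
    done
    for ((i = 0; i < nthreads; i++)); do        # @62
        add_remove $((i + 1)) "null$i" &        # @63: nsid 1..8 backed by null0..null7
        pids+=($!)                              # @64: remember the worker's PID
    done
    wait "${pids[@]}"                           # @66: barrier on all eight workers
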
00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
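
The eight PIDs on the "wait 4090186 ... 4090199" line just below are the workers collected via pids+=($!), and the dense nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns interleaving that fills the rest of this section is those workers racing on namespace IDs 1-8 against the same subsystem, which is the hotplug stress being exercised.
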
00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 4090186 4090187 4090189 4090192 4090193 4090195 4090197 4090199 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:28.659 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:28.961 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:28.961 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:28.961 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:28.961 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:28.961 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:28.961 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:28.961 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:28.961 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:28.961 10:04:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:28.961 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:28.961 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:28.961 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:28.961 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:28.961 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:28.961 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:28.961 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:28.961 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:28.961 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:28.961 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:28.961 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:28.961 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:28.961 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:28.961 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:28.961 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:28.962 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:28.962 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:29.233 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:29.233 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:29.233 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:29.233 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:29.233 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:29.233 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:29.233 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:29.234 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:29.234 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:29.234 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:29.234 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:29.234 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:29.234 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:29.234 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:29.494 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:29.494 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:29.494 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:29.494 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:29.494 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:29.494 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:29.494 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:29.494 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:29.494 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:29.494 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:29.494 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:29.494 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:29.494 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:29.494 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:29.494 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:29.494 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:29.494 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:29.494 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:29.494 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:29.494 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:29.494 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:29.494 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:29.494 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:29.494 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:29.494 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:29.494 10:04:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:29.494 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:29.494 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:29.494 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:29.494 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:29.494 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:29.755 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:29.755 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:29.755 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:29.755 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:29.755 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:29.755 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:29.755 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:29.755 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:29.755 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:29.755 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:29.755 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:29.755 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:29.755 10:04:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:29.755 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:29.755 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:29.755 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:29.755 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:29.755 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:29.755 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:29.755 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:29.755 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:29.755 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:29.755 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:29.755 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:29.755 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:30.016 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:30.016 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:30.016 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:30.016 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:30.016 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:30.016 
10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:30.016 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:30.016 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:30.016 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:30.016 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:30.016 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:30.276 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:30.276 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:30.276 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:30.276 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:30.276 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:30.276 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:30.276 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:30.276 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:30.276 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:30.276 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:30.276 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:30.276 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:30.276 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:30.276 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:30.276 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:30.276 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:30.276 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:30.276 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:30.276 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:30.276 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:30.276 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:30.276 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:30.276 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:30.276 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:30.276 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:30.276 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:30.276 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:30.276 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:30.276 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:30.277 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:30.277 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:32:30.277 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:30.537 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:30.537 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:30.537 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:30.537 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:30.537 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:30.537 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:30.537 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:30.537 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:30.537 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:30.537 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:30.537 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:30.537 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:30.537 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:30.537 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:30.537 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:30.537 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:30.537 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:30.537 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:30.537 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:30.537 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:30.537 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:30.537 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:30.797 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:30.797 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:30.797 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:30.797 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:30.797 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:30.797 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:30.797 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:30.797 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:30.797 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:30.797 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:30.797 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:30.798 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:30.798 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:30.798 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:30.798 
10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:30.798 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:31.058 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:31.058 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:31.058 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:31.058 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:31.058 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:31.058 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:31.058 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:31.058 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:31.058 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:31.058 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:31.058 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:31.058 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:31.058 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:31.058 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:31.058 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:31.058 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:31.058 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:31.058 10:04:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:31.058 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:31.058 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:31.058 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:31.058 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:31.058 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:31.058 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:31.058 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:31.318 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:31.318 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:31.318 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:31.318 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:31.318 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:31.318 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:31.318 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:31.318 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:31.318 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:31.318 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:31.318 
10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:31.318 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:31.318 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:31.318 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:31.318 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:31.318 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:31.318 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:31.318 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:31.319 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:31.319 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:31.319 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:31.319 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:31.319 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:31.319 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:31.579 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:31.579 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:31.579 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:31.579 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:31.579 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:32:31.579 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:31.579 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:31.579 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:31.579 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:31.579 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:31.579 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:31.579 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:31.579 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:31.579 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:31.579 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:31.579 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:31.840 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:31.840 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:31.840 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:31.840 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:31.840 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:31.840 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:31.840 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:32:31.840 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:31.840 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:31.840 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:31.840 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:31.840 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:31.840 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:31.840 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:31.840 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:31.840 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:31.840 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:31.840 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:31.840 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:31.840 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:31.840 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:31.840 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:31.840 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:31.840 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:31.840 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:31.840 10:04:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:31.840 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:32.100 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:32.100 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:32.100 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:32.100 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:32.100 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:32.100 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:32.100 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:32.100 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:32.100 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:32.100 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:32.100 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:32.100 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:32.100 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:32.100 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:32.100 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:32.100 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:32.100 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:32.100 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:32.100 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:32.100 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:32.100 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:32.100 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:32.360 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:32.360 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:32.360 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:32.360 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:32.360 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:32.360 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:32.360 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:32.360 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:32.360 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:32.360 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:32.360 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:32.360 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:32.360 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:32.360 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:32.360 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:32.621 10:04:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:32.621 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:32.621 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:32.621 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:32.621 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:32.621 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:32.621 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:32:32.621 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:32:32.621 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:32.621 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:32:32.621 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:32.621 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:32:32.621 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:32.621 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:32.621 rmmod nvme_tcp
00:32:32.621 rmmod nvme_fabrics
00:32:32.621 rmmod nvme_keyring
00:32:32.621 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:32.621 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:32:32.621 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:32:32.621 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 4083440 ']'
00:32:32.621 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 4083440
00:32:32.621 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 4083440 ']'
00:32:32.621 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 4083440
00:32:32.621 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:32:32.621 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:32.621 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4083440
00:32:32.621 10:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:32:32.621 10:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
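The iteration trace above comes from lines 16-18 of ns_hotplug_stress.sh: ten passes of a loop that hot-adds namespaces to subsystem nqn.2016-06.io.spdk:cnode1 through rpc.py and hot-removes them again while host I/O keeps running. A minimal sketch of that shape, assuming sequential namespace IDs (the real script clearly walks them in a shuffled order, as the interleaved add/remove calls show; rpc and nqn below are just local shorthands):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  for (( i = 0; i < 10; ++i )); do                        # ns_hotplug_stress.sh@16
      for n in {1..8}; do
          # hot-add nsid n, backed by null bdev null(n-1)  (@17)
          "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
      done
      for n in {1..8}; do
          # hot-remove the same namespaces                 (@18)
          "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
      done
  done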
00:32:32.621 10:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4083440'
killing process with pid 4083440
00:32:32.621 10:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 4083440
00:32:32.621 10:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 4083440
00:32:32.882 10:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:32:32.882 10:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:32:32.882 10:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:32:32.882 10:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:32:32.882 10:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:32:32.882 10:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:32:32.882 10:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:32:32.882 10:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:32:32.882 10:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:32:32.882 10:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:32.882 10:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:32.882 10:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:34.796 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:32:34.796
00:32:34.796 real 0m48.789s
00:32:34.796 user 3m2.450s
00:32:34.796 sys 0m22.175s
00:32:34.796 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:34.796 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
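The teardown traced above is nvmftestfini: flush I/O, unload the host-side NVMe/TCP kernel modules, kill the SPDK target (pid 4083440 in this run), strip the SPDK-tagged iptables rules, and drop the target's network namespace. A condensed sketch under those assumptions (nvmfpid is a stand-in name for the pid variable, and the ip netns delete is an assumption about what _remove_spdk_ns ultimately does):

  nvmftestfini_sketch() {
      sync
      modprobe -v -r nvme-tcp        # retried up to 20 times under set +e in the real helper;
      modprobe -v -r nvme-fabrics    # the -v output is the rmmod lines seen above
      kill "$nvmfpid" && wait "$nvmfpid"                    # killprocess
      iptables-save | grep -v SPDK_NVMF | iptables-restore  # iptr: drop only SPDK-tagged rules
      ip netns delete cvl_0_0_ns_spdk                       # assumed body of _remove_spdk_ns
      ip -4 addr flush cvl_0_1
  }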
00:32:34.796 ************************************
00:32:34.796 END TEST nvmf_ns_hotplug_stress
************************************
00:32:35.058 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:32:35.058 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:32:35.058 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:35.058 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:32:35.058 ************************************
00:32:35.058 START TEST nvmf_delete_subsystem
************************************
00:32:35.058 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:32:35.058 * Looking for test storage...
00:32:35.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
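Every test in this log is bracketed by run_test from common/autotest_common.sh, which is what produces the banner pairs and the real/user/sys timing block just above. A hedged sketch of that wrapper; the actual helper does more bookkeeping (timing files, xtrace management), so this only shows the visible shape:

  run_test_sketch() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"            # bash's time keyword yields the real/user/sys lines
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }
  run_test_sketch nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode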
00:32:35.058 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:32:35.058 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version
00:32:35.058 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:32:35.058 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:32:35.319 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:32:35.319 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:32:35.319 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:32:35.319 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:32:35.319 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:32:35.319 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:32:35.319 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:32:35.319 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:32:35.319 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:32:35.319 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:32:35.319 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:32:35.319 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:32:35.319 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:32:35.319 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:32:35.319 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:32:35.319 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:32:35.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:35.320 --rc genhtml_branch_coverage=1
00:32:35.320 --rc genhtml_function_coverage=1
00:32:35.320 --rc genhtml_legend=1
00:32:35.320 --rc geninfo_all_blocks=1
00:32:35.320 --rc geninfo_unexecuted_blocks=1
00:32:35.320
00:32:35.320 '
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:32:35.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:35.320 --rc genhtml_branch_coverage=1
00:32:35.320 --rc genhtml_function_coverage=1
00:32:35.320 --rc genhtml_legend=1
00:32:35.320 --rc geninfo_all_blocks=1
00:32:35.320 --rc geninfo_unexecuted_blocks=1
00:32:35.320
00:32:35.320 '
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:32:35.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:35.320 --rc genhtml_branch_coverage=1
00:32:35.320 --rc genhtml_function_coverage=1
00:32:35.320 --rc genhtml_legend=1
00:32:35.320 --rc geninfo_all_blocks=1
00:32:35.320 --rc geninfo_unexecuted_blocks=1
00:32:35.320
00:32:35.320 '
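The lt 1.15 2 / cmp_versions walk above checks whether the installed lcov is older than version 2: both version strings are split on '.', '-' and ':' (the IFS=.-: steps) and compared component by component up to the longer of the two lengths. A self-contained sketch of the same algorithm; the real scripts/common.sh normalizes each component through its decimal() helper, which the :-0 default stands in for here:

  cmp_versions_sketch() {
      local IFS=.-:
      local -a ver1 ver2
      local op=$2 v
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == '=' || $op == '<=' || $op == '>=' ]]
  }
  cmp_versions_sketch 1.15 '<' 2 && echo older   # first components: 1 < 2, status 0, matching the trace's return 0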
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:32:35.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:35.320 --rc genhtml_branch_coverage=1
00:32:35.320 --rc genhtml_function_coverage=1
00:32:35.320 --rc genhtml_legend=1
00:32:35.320 --rc geninfo_all_blocks=1
00:32:35.320 --rc geninfo_unexecuted_blocks=1
00:32:35.320
00:32:35.320 '
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:35.320 10:04:50
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:35.320 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:32:35.321 10:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:43.460 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:43.461 10:04:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:43.461 10:04:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:43.461 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:43.461 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:43.461 10:04:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:43.461 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:43.461 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:43.462 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:43.462 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:43.462 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:43.462 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:43.462 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:32:43.462 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:43.462 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:43.462 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:43.462 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:43.462 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:43.462 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:43.462 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:43.462 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:43.462 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:43.462 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:43.462 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:43.462 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:43.462 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:43.462 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:43.462 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:43.462 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:43.462 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:43.462 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:43.462 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:43.462 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:43.462 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:43.462 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:43.462 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:43.462 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:43.462 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:43.462 10:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:43.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:43.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.582 ms 00:32:43.462 00:32:43.462 --- 10.0.0.2 ping statistics --- 00:32:43.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:43.462 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:32:43.462 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:43.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:43.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:32:43.462 00:32:43.462 --- 10.0.0.1 ping statistics --- 00:32:43.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:43.462 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:32:43.462 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:43.462 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:32:43.462 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:43.462 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:43.462 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:43.462 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:43.462 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:43.462 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:43.462 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:43.462 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:32:43.462 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:43.462 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:43.462 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:43.462 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=4095351 00:32:43.462 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 4095351 00:32:43.462 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:43.462 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 4095351 ']' 00:32:43.462 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:43.462 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:43.462 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:43.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
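[Annotation] At this point the test topology is fully wired: cvl_0_0 lives in the cvl_0_0_ns_spdk namespace as 10.0.0.2 (target side), cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side), an iptables rule admits TCP port 4420, and pings succeed in both directions. The launch-and-wait underway here can be approximated as below — a sketch only: the polling loop stands in for the harness's more careful waitforlisten, and rpc_get_methods is used merely as a cheap RPC to probe /var/tmp/spdk.sock:

    # Start nvmf_tgt inside the target namespace and wait for its RPC socket.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
      sleep 0.2
    done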
00:32:43.462 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:43.462 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:43.462 [2024-11-27 10:04:58.127655] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:43.462 [2024-11-27 10:04:58.128795] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:32:43.462 [2024-11-27 10:04:58.128844] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:43.462 [2024-11-27 10:04:58.228237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:43.463 [2024-11-27 10:04:58.278756] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:43.463 [2024-11-27 10:04:58.278806] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:43.463 [2024-11-27 10:04:58.278814] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:43.463 [2024-11-27 10:04:58.278822] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:43.463 [2024-11-27 10:04:58.278829] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:43.463 [2024-11-27 10:04:58.280584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:43.463 [2024-11-27 10:04:58.280588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:43.463 [2024-11-27 10:04:58.357230] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:43.463 [2024-11-27 10:04:58.357806] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:43.463 [2024-11-27 10:04:58.358108] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
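[Annotation] With --interrupt-mode, the reactors and the nvmf_tgt poll-group threads above come up event-driven instead of busy-polling, which is what this *_interrupt_mode variant of the test group exercises. The rpc_cmd calls traced below then provision the target end to end; issued directly against the RPC socket they would look roughly like this (rpc_cmd is the harness's wrapper over scripts/rpc.py; the flag glosses in the comments are the common meanings of these options, not taken from the trace):

    rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -u 8192               # TCP transport, 8 KiB in-capsule data
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                         # allow any host, serial no., max 10 namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                             # listen inside the namespace
    $rpc bdev_null_create NULL1 1000 512                       # 1000 MiB null bdev, 512-byte blocks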
00:32:43.724 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:43.724 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:32:43.724 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:43.724 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:43.724 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:43.724 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:43.724 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:43.724 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.724 10:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:43.724 [2024-11-27 10:04:58.989616] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:43.724 10:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.724 10:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:43.724 10:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.724 10:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:43.724 10:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.724 10:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:43.724 10:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.724 10:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:43.724 [2024-11-27 10:04:59.022083] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:43.724 10:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.724 10:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:32:43.724 10:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.724 10:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:43.724 NULL1 00:32:43.724 10:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.724 10:04:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:43.724 10:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.724 10:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:43.724 Delay0 00:32:43.724 10:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.724 10:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:43.724 10:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.724 10:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:43.724 10:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.724 10:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=4095404 00:32:43.724 10:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:32:43.724 10:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:32:43.724 [2024-11-27 10:04:59.146246] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
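[Annotation] Delay0 wraps NULL1 with one-second (1000000 us) average and tail latencies on both reads and writes — bdev_delay_create's -r/-t/-w/-n set average/p99 read and write latency in microseconds — so the queue-depth-128 perf job can never drain: at roughly one second per I/O over a five-second run (-t 5), plenty of commands are still outstanding when the subsystem is deleted two seconds in. Reconstructed as direct commands, assuming the rpc= shorthand from the sketch above; the comment glosses are the usual meanings of these perf options:

    $rpc bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000    # ~1 s avg and p99, reads and writes
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &      # cores 2-3, QD 128, 70% reads, 512 B I/O
    perf_pid=$!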
00:32:45.637 10:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:45.637 10:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:45.637 10:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:32:45.898 [several dozen repetitions of 'Read/Write completed with error (sct=0, sc=8)' interleaved with 'starting I/O failed: -6', emitted as the perf job's queued I/O is aborted, condensed here]
00:32:45.899 [2024-11-27 10:05:01.311864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232d680 is same with the state(6) to be set
00:32:45.899 [further 'completed with error (sct=0, sc=8)' completions condensed]
00:32:45.899 [2024-11-27 10:05:01.312447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232d2c0 is same with the state(6) to be set
00:32:45.899 [further 'completed with error (sct=0, sc=8)' completions condensed]
00:32:45.899 [2024-11-27 10:05:01.315644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f999000d490 is same with the state(6) to be set
00:32:45.899 [further 'completed with error (sct=0, sc=8)' completions condensed]
00:32:46.844 [2024-11-27 10:05:02.285689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232e9a0 is same with the state(6) to be set
00:32:47.106 [further 'completed with error (sct=0, sc=8)' completions condensed]
00:32:47.106 [2024-11-27 10:05:02.318305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232d4a0 is same with the state(6) to be set
00:32:47.106 [further 'completed with error (sct=0, sc=8)' completions condensed]
00:32:47.106 [2024-11-27 10:05:02.318567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232d860 is same with the state(6) to be set
00:32:47.106 [2024-11-27 10:05:02.319646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f999000d020 is same with the state(6) to be set
00:32:47.106 [2024-11-27 10:05:02.319749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f999000d7c0 is same with the state(6) to be set
00:32:47.106 Initializing NVMe Controllers
00:32:47.106 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:47.106 Controller IO queue size 128, less than required.
00:32:47.106 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:47.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:32:47.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:32:47.106 Initialization complete. Launching workers.
00:32:47.106 ========================================================
00:32:47.106                                                                            Latency(us)
00:32:47.106 Device Information                                                       :    IOPS   MiB/s    Average        min        max
00:32:47.106 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  185.00    0.09  902389.16     600.37 1011240.29
00:32:47.106 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  151.18    0.07  940259.61     275.71 1012532.36
00:32:47.106 ========================================================
00:32:47.106 Total                                                                    :  336.18    0.16  919419.66     275.71 1012532.36
00:32:47.106 [2024-11-27 10:05:02.320655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232e9a0 (9): Bad file descriptor
00:32:47.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:32:47.107 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:47.107 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:32:47.107 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4095404
00:32:47.107 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:32:47.369 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:32:47.369 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4095404
00:32:47.369 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (4095404) - No such process
00:32:47.369 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 4095404
00:32:47.369 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:32:47.369 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 4095404
00:32:47.369 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:32:47.369 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:32:47.369 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:32:47.369 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:32:47.369 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 4095404
00:32:47.369 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:32:47.369 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:32:47.369 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:32:47.630 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:32:47.630 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:47.630 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.630 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:47.630 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.630 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:47.630 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.630 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:47.630 [2024-11-27 10:05:02.853974] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:47.630 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.630 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:47.630 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.630 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:47.630 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.630 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=4096122 00:32:47.630 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:32:47.630 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:32:47.630 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4096122 00:32:47.630 10:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:47.630 [2024-11-27 10:05:02.961867] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
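[Annotation] The first perf run ends in the flood of (sct=0, sc=8) completions condensed above — generic NVMe status 0x08, which reads as commands aborted when their submission queue is deleted — exactly what nvmf_delete_subsystem under load should produce, and the harness confirms the failure with the expected non-zero exit (NOT wait). The target is then re-provisioned and a second, 3-second perf job is started; the trace below polls with kill -0 until that job exits on its own, every I/O completing at the delay bdev's ~1 s latency, with the wait bounded at about 20 half-second ticks. The poll idiom, reduced to its essentials (perf_pid as in the sketch above; the timeout branch is an assumption about what the script does when the bound is exceeded):

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do     # kill -0 only probes; it fails once perf exits
      (( delay++ > 20 )) && { echo "perf did not finish in time" >&2; exit 1; }
      sleep 0.5
    done
    wait "$perf_pid"                              # collect perf's exit status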
00:32:48.202 10:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
10:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4096122
10:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:32:48.462 [the same (( delay++ > 20 )) / kill -0 4096122 / sleep 0.5 poll repeats at 00:32:48.462, 00:32:49.034, 00:32:49.607, 00:32:50.178 and 00:32:50.439 while the 3-second perf run completes]
00:32:50.700 Initializing NVMe Controllers
00:32:50.700 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:50.700 Controller IO queue size 128, less than required.
00:32:50.700 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:50.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:32:50.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:32:50.700 Initialization complete. Launching workers.
00:32:50.700 ========================================================
00:32:50.700                                                                            Latency(us)
00:32:50.700 Device Information                                                       :    IOPS   MiB/s     Average         min         max
00:32:50.700 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  128.00    0.06  1002847.24  1000305.95  1006847.18
00:32:50.700 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  128.00    0.06  1005197.14  1000388.85  1012398.47
00:32:50.700 ========================================================
00:32:50.700 Total                                                                    :  256.00    0.12  1004022.19  1000305.95  1012398.47
00:32:50.961 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:32:50.961 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4096122
00:32:50.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (4096122) - No such process
00:32:50.961 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 4096122
00:32:50.961 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:32:50.961 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:32:50.961 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:50.961 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:32:50.961 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:50.961 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:32:50.961 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:50.961 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:50.961 rmmod nvme_tcp
00:32:51.222 rmmod nvme_fabrics
00:32:51.222 rmmod nvme_keyring
00:32:51.222 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:51.222 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:32:51.222 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:32:51.222 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 4095351 ']'
00:32:51.222 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 4095351
00:32:51.222 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 4095351 ']'
00:32:51.222 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 4095351
00:32:51.222 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:32:51.222 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:51.222 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4095351 00:32:51.222 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:51.222 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:51.222 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4095351' 00:32:51.222 killing process with pid 4095351 00:32:51.222 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 4095351 00:32:51.222 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 4095351 00:32:51.222 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:51.222 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:51.222 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:51.222 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:32:51.222 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:32:51.222 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:51.222 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:32:51.222 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:51.222 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:51.222 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:51.222 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:51.222 10:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:53.769 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:53.769 00:32:53.769 real 0m18.386s 00:32:53.769 user 0m26.554s 00:32:53.769 sys 0m7.519s 00:32:53.769 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:53.769 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:53.769 ************************************ 00:32:53.769 END TEST nvmf_delete_subsystem 00:32:53.769 ************************************ 00:32:53.769 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:32:53.769 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:53.769 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:32:53.769 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:53.769 ************************************ 00:32:53.769 START TEST nvmf_host_management 00:32:53.769 ************************************ 00:32:53.769 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:32:53.769 * Looking for test storage... 00:32:53.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:53.769 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:53.769 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:32:53.769 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:53.769 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:53.769 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:53.769 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:53.769 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:53.769 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:32:53.769 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:32:53.769 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:32:53.769 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:32:53.769 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:32:53.769 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:32:53.769 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:32:53.769 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:53.769 10:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:53.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.769 --rc genhtml_branch_coverage=1 00:32:53.769 --rc genhtml_function_coverage=1 00:32:53.769 --rc genhtml_legend=1 00:32:53.769 --rc geninfo_all_blocks=1 00:32:53.769 --rc geninfo_unexecuted_blocks=1 00:32:53.769 00:32:53.769 ' 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:53.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.769 --rc genhtml_branch_coverage=1 00:32:53.769 --rc genhtml_function_coverage=1 00:32:53.769 --rc genhtml_legend=1 00:32:53.769 --rc geninfo_all_blocks=1 00:32:53.769 --rc geninfo_unexecuted_blocks=1 00:32:53.769 00:32:53.769 ' 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:53.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.769 --rc genhtml_branch_coverage=1 00:32:53.769 --rc genhtml_function_coverage=1 00:32:53.769 --rc genhtml_legend=1 00:32:53.769 --rc geninfo_all_blocks=1 00:32:53.769 --rc geninfo_unexecuted_blocks=1 00:32:53.769 00:32:53.769 ' 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:53.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.769 --rc genhtml_branch_coverage=1 00:32:53.769 --rc genhtml_function_coverage=1 00:32:53.769 --rc genhtml_legend=1 
00:32:53.769 --rc geninfo_all_blocks=1 00:32:53.769 --rc geninfo_unexecuted_blocks=1 00:32:53.769 00:32:53.769 ' 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:53.769 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:53.770 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.770 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.770 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.770 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:32:53.770 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.770 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:32:53.770 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:53.770 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:53.770 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:53.770 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:53.770 10:05:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:53.770 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:53.770 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:53.770 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:53.770 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:53.770 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:53.770 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:53.770 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:53.770 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:32:53.770 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:53.770 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:53.770 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:53.770 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:53.770 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:53.770 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:53.770 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:53.770 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:53.770 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:53.770 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:53.770 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:32:53.770 10:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:01.914 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:01.914 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:33:01.914 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:01.914 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:01.914 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:01.914 10:05:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:01.914 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:01.914 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:33:01.914 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:01.914 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:01.915 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:01.915 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
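The gather_supported_nvmf_pci_devs walk traced above matches sysfs PCI vendor/device IDs against the known E810 parts and keeps only ports that have an up net device bound. A minimal standalone sketch of that idea, not the harness function itself, assuming the standard Linux sysfs PCI layout:

#!/usr/bin/env bash
# Sketch: list Intel E810 ports (vendor 0x8086, device 0x159b) and the
# kernel net devices bound to them, keeping only interfaces that are up.
intel=0x8086
e810=0x159b
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == "$intel" && $(<"$pci/device") == "$e810" ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] || continue                 # no net device bound to this port
        [[ $(<"$net/operstate") == up ]] &&
            echo "Found ${pci##*/}: ${net##*/}"
    done
done

Run against the machine in this log, a sketch like this would report the same two ports, 0000:4b:00.0 (cvl_0_0) and 0000:4b:00.1 (cvl_0_1), that the trace records next.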
00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:01.915 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:01.915 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:01.915 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:01.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:01.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:33:01.916 00:33:01.916 --- 10.0.0.2 ping statistics --- 00:33:01.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:01.916 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:01.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:01.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:33:01.916 00:33:01.916 --- 10.0.0.1 ping statistics --- 00:33:01.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:01.916 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=4101054 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 4101054 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 4101054 ']' 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:01.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:01.916 10:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:01.916 [2024-11-27 10:05:16.617690] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:01.916 [2024-11-27 10:05:16.618825] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:33:01.916 [2024-11-27 10:05:16.618876] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:01.916 [2024-11-27 10:05:16.719594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:01.916 [2024-11-27 10:05:16.772443] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:01.916 [2024-11-27 10:05:16.772494] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:01.916 [2024-11-27 10:05:16.772504] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:01.916 [2024-11-27 10:05:16.772511] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:01.916 [2024-11-27 10:05:16.772517] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:01.916 [2024-11-27 10:05:16.774533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:01.916 [2024-11-27 10:05:16.774696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:01.916 [2024-11-27 10:05:16.774859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:01.916 [2024-11-27 10:05:16.774859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:01.916 [2024-11-27 10:05:16.851421] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:01.916 [2024-11-27 10:05:16.852428] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:01.916 [2024-11-27 10:05:16.852765] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:01.916 [2024-11-27 10:05:16.853443] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:01.916 [2024-11-27 10:05:16.853497] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
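What just completed is the test network and target bring-up: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule admits NVMe/TCP traffic on port 4420, a one-packet ping runs in each direction, and nvmf_tgt is then launched inside the namespace. Condensed into a standalone sketch; the flags and addresses are the ones visible in this run, and the iptables comment tag is simplified:

# Target NIC goes into its own namespace; initiator NIC stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Admit NVMe/TCP from the initiator side; the comment tag is what lets the
# teardown's iptr step (iptables-save | grep -v SPDK_NVMF | iptables-restore,
# seen earlier in this log) strip exactly these rules later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF
# Sanity-check reachability in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# Start the target inside the namespace: interrupt mode, cores 1-4 (-m 0x1E).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF \
    --interrupt-mode -m 0x1E &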
00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:02.180 [2024-11-27 10:05:17.479734] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:02.180 Malloc0 00:33:02.180 [2024-11-27 10:05:17.580022] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=4101232 00:33:02.180 10:05:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 4101232 /var/tmp/bdevperf.sock 00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 4101232 ']' 00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:02.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:02.180 { 00:33:02.180 "params": { 00:33:02.180 "name": "Nvme$subsystem", 00:33:02.180 "trtype": "$TEST_TRANSPORT", 00:33:02.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:02.180 "adrfam": "ipv4", 00:33:02.180 "trsvcid": "$NVMF_PORT", 00:33:02.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:02.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:02.180 "hdgst": ${hdgst:-false}, 00:33:02.180 "ddgst": ${ddgst:-false} 00:33:02.180 }, 00:33:02.180 "method": "bdev_nvme_attach_controller" 00:33:02.180 } 00:33:02.180 EOF 00:33:02.180 )") 00:33:02.180 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:33:02.441 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
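The gen_nvmf_target_json call traced above expands one bdev_nvme_attach_controller stanza per subsystem index from a heredoc template, validates the result with jq, and bdevperf consumes it on /dev/fd/63. A minimal reproduction of just the templating step, using the values visible in this run; the real generator also wraps the stanzas into its top-level config:

# Sketch: build the per-subsystem attach stanza the way the heredoc does.
TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420
subsystem=0
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config" | jq .    # jq validates (and pretty-prints) the generated JSON

The resolved output, with Nvme0 attaching to 10.0.0.2:4420 over TCP, is exactly what the printf in the next trace records shows.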
00:33:02.441 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:33:02.441 10:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:02.441 "params": { 00:33:02.441 "name": "Nvme0", 00:33:02.441 "trtype": "tcp", 00:33:02.441 "traddr": "10.0.0.2", 00:33:02.441 "adrfam": "ipv4", 00:33:02.441 "trsvcid": "4420", 00:33:02.441 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:02.441 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:02.441 "hdgst": false, 00:33:02.441 "ddgst": false 00:33:02.441 }, 00:33:02.441 "method": "bdev_nvme_attach_controller" 00:33:02.441 }' 00:33:02.441 [2024-11-27 10:05:17.691718] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:33:02.441 [2024-11-27 10:05:17.691791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4101232 ] 00:33:02.441 [2024-11-27 10:05:17.785705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:02.441 [2024-11-27 10:05:17.839714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:03.012 Running I/O for 10 seconds... 00:33:03.274 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:03.274 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:33:03.274 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:33:03.274 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.274 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:03.274 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.274 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:03.274 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:33:03.274 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:33:03.274 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:33:03.274 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:33:03.274 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:33:03.274 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:33:03.274 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:33:03.274 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:33:03.274 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:33:03.274 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.274 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:03.274 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.275 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:33:03.275 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:33:03.275 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:33:03.275 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:33:03.275 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:33:03.275 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:33:03.275 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.275 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:03.275 [2024-11-27 10:05:18.597285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc2a0 is same with the state(6) to be set 00:33:03.275 [2024-11-27 10:05:18.597365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc2a0 is same with the state(6) to be set 00:33:03.275 [2024-11-27 10:05:18.597376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc2a0 is same with the state(6) to be set 00:33:03.275 [2024-11-27 10:05:18.597384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc2a0 is same with the state(6) to be set 00:33:03.275 [2024-11-27 10:05:18.597392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc2a0 is same with the state(6) to be set 00:33:03.275 [2024-11-27 10:05:18.597400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc2a0 is same with the state(6) to be set 00:33:03.275 [2024-11-27 10:05:18.597409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc2a0 is same with the state(6) to be set 00:33:03.275 [2024-11-27 10:05:18.597416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc2a0 is same with the state(6) to be set 00:33:03.275 [2024-11-27 10:05:18.597423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc2a0 is same with the state(6) to be set 00:33:03.275 [2024-11-27 10:05:18.597437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc2a0 is same with the state(6) to be set 00:33:03.275 [2024-11-27 10:05:18.597444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc2a0 is same with the state(6) to be set 
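The waitforio check traced just above polls bdevperf's private RPC socket until the Nvme0n1 bdev has completed at least 100 reads (579 here), proving I/O is in flight before nvmf_subsystem_remove_host yanks the host; the recv-state notices interleaved around this point accompany the qpair teardown that removal triggers. A sketch of that polling loop, assuming SPDK's scripts/rpc.py is available:

# Sketch: poll bdevperf's iostat until reads are observed (10 attempts).
sock=/var/tmp/bdevperf.sock
ret=1
for ((i = 10; i != 0; i--)); do
    ops=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b Nvme0n1 |
        jq -r '.bdevs[0].num_read_ops')
    if [[ $ops -ge 100 ]]; then
        ret=0      # enough I/O has completed; safe to start the host removal
        break
    fi
    sleep 0.25     # retry delay is an assumption, not taken from the trace
done
exit "$ret"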
00:33:03.275 [2024-11-27 10:05:18.597452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc2a0 is same with the state(6) to be set 00:33:03.275 [2024-11-27 10:05:18.597462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc2a0 is same with the state(6) to be set 00:33:03.275 [2024-11-27 10:05:18.597469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc2a0 is same with the state(6) to be set 00:33:03.275 [2024-11-27 10:05:18.597476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc2a0 is same with the state(6) to be set 00:33:03.275 [2024-11-27 10:05:18.597483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc2a0 is same with the state(6) to be set 00:33:03.275 [2024-11-27 10:05:18.597491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc2a0 is same with the state(6) to be set 00:33:03.275 [2024-11-27 10:05:18.597498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc2a0 is same with the state(6) to be set 00:33:03.275 [2024-11-27 10:05:18.597505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc2a0 is same with the state(6) to be set 00:33:03.275 [2024-11-27 10:05:18.597512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc2a0 is same with the state(6) to be set 00:33:03.275 [2024-11-27 10:05:18.597520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc2a0 is same with the state(6) to be set 00:33:03.275 [2024-11-27 10:05:18.597526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc2a0 is same with the state(6) to be set 00:33:03.275 [2024-11-27 10:05:18.597534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc2a0 is same with the state(6) to be set 00:33:03.275 [2024-11-27 10:05:18.597540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc2a0 is same with the state(6) to be set 00:33:03.275 [2024-11-27 10:05:18.597547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc2a0 is same with the state(6) to be set 00:33:03.275 [2024-11-27 10:05:18.597554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc2a0 is same with the state(6) to be set 00:33:03.275 [2024-11-27 10:05:18.597562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc2a0 is same with the state(6) to be set 00:33:03.275 [2024-11-27 10:05:18.597569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc2a0 is same with the state(6) to be set 00:33:03.275 [2024-11-27 10:05:18.597576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc2a0 is same with the state(6) to be set 00:33:03.275 [2024-11-27 10:05:18.597585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc2a0 is same with the state(6) to be set 00:33:03.275 [2024-11-27 10:05:18.597592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc2a0 is same with the state(6) to be set 00:33:03.275 [2024-11-27 10:05:18.597599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc2a0 is same with the state(6) to be set 00:33:03.275 [2024-11-27 10:05:18.597606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x24cc2a0 is same with the state(6) to be set
00:33:03.275 [2024-11-27 10:05:18.597613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc2a0 is same with the state(6) to be set
00:33:03.275 [message repeated 27 more times between 10:05:18.597620 and 10:05:18.597805, identical apart from the timestamp]
00:33:03.276 [2024-11-27 10:05:18.597883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.276 [2024-11-27 10:05:18.597939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:03.276 [the same command/completion pair repeats for all 64 outstanding READs (cid 0-63, lba 81920-89984 in steps of 128, len:128), every one completed ABORTED - SQ DELETION (00/08), through 10:05:18.599085]
00:33:03.278 [2024-11-27 10:05:18.599094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5210 is same with the state(6) to be set
00:33:03.278 [2024-11-27 10:05:18.600412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:33:03.278 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:03.278 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:33:03.278 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:03.278 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:03.278 task offset: 81920 on job bdev=Nvme0n1 fails
00:33:03.278
00:33:03.278 Latency(us)
00:33:03.278 [2024-11-27T09:05:18.744Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:03.278 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:33:03.278 Job: Nvme0n1 ended in about 0.43 seconds with error
00:33:03.278 Verification LBA range: start 0x0 length 0x400
00:33:03.278 Nvme0n1 : 0.43 1485.69 92.86 148.57 0.00 37967.84 4478.29 33860.27
00:33:03.278 [2024-11-27T09:05:18.744Z] ===================================================================================================================
00:33:03.278 [2024-11-27T09:05:18.744Z] Total : 1485.69 92.86 148.57 0.00 37967.84 4478.29 33860.27
00:33:03.278 [2024-11-27 10:05:18.602677] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:33:03.278 [2024-11-27 10:05:18.602716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa9c000 (9): Bad file descriptor
00:33:03.278 [2024-11-27 10:05:18.604341] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:33:03.278 [2024-11-27 10:05:18.604437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:33:03.278 [2024-11-27 10:05:18.604483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:03.278 [2024-11-27 10:05:18.604498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:33:03.278 [2024-11-27 10:05:18.604508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:33:03.278 [2024-11-27 10:05:18.604516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:03.278 [2024-11-27 10:05:18.604524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa9c000
00:33:03.278 [2024-11-27 10:05:18.604549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa9c000 (9): Bad file descriptor
00:33:03.278 [2024-11-27 10:05:18.604564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:33:03.278 [2024-11-27 10:05:18.604572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:33:03.278 [2024-11-27 10:05:18.604582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:33:03.278 [2024-11-27 10:05:18.604593] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
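The failed run above is the expected half of the host-management test: the host NQN was apparently dropped from the subsystem's allow list earlier in the test, so the controller reset at 10:05:18.600412 reconnects into "does not allow host" and the reset is abandoned, while the rpc_cmd trace (rpc_cmd is a thin wrapper around scripts/rpc.py) re-adds the host for the retry that follows. A minimal hedged sketch of that allow/deny round-trip with plain rpc.py, assuming a target is already up and serving nqn.2016-06.io.spdk:cnode0:

    # Drop host0 from the allow list; its in-flight I/O is aborted
    # (the SQ DELETION dump above) and reconnect attempts are refused.
    ./scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # Re-admit host0; the next Fabric CONNECT from this host NQN succeeds.
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0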
00:33:03.278 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:03.278 10:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:33:04.220 10:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 4101232
00:33:04.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (4101232) - No such process
00:33:04.220 10:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true
00:33:04.220 10:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:33:04.220 10:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:33:04.220 10:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:33:04.220 10:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:33:04.220 10:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:33:04.220 10:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:33:04.220 10:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:33:04.220 {
00:33:04.220 "params": {
00:33:04.220 "name": "Nvme$subsystem",
00:33:04.220 "trtype": "$TEST_TRANSPORT",
00:33:04.220 "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:04.220 "adrfam": "ipv4",
00:33:04.220 "trsvcid": "$NVMF_PORT",
00:33:04.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:04.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:04.220 "hdgst": ${hdgst:-false},
00:33:04.220 "ddgst": ${ddgst:-false}
00:33:04.220 },
00:33:04.220 "method": "bdev_nvme_attach_controller"
00:33:04.220 }
00:33:04.220 EOF
00:33:04.221 )")
00:33:04.221 10:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:33:04.221 10:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:33:04.221 10:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:33:04.221 10:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:33:04.221 "params": {
00:33:04.221 "name": "Nvme0",
00:33:04.221 "trtype": "tcp",
00:33:04.221 "traddr": "10.0.0.2",
00:33:04.221 "adrfam": "ipv4",
00:33:04.221 "trsvcid": "4420",
00:33:04.221 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:33:04.221 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:33:04.221 "hdgst": false,
00:33:04.221 "ddgst": false
00:33:04.221 },
00:33:04.221 "method": "bdev_nvme_attach_controller"
00:33:04.221 }'
00:33:04.221 [2024-11-27 10:05:19.675667] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization...
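The bdevperf re-run is configured entirely through the JSON that gen_nvmf_target_json prints above and feeds in over /dev/fd/62, which avoids a temporary config file. A hedged standalone equivalent is sketched below; the /tmp path and the outer "subsystems"/"bdev" envelope are assumptions (the trace only shows the inner bdev_nvme_attach_controller object), everything else is taken from the trace:

    # Write the same attach-controller config to a file...
    cat > /tmp/bdevperf_nvmf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # ...and run the same 1-second, queue-depth 64, 64 KiB verify workload:
    ./build/examples/bdevperf --json /tmp/bdevperf_nvmf.json -q 64 -o 65536 -w verify -t 1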
00:33:04.221 [2024-11-27 10:05:19.675749] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4101638 ]
00:33:04.482 [2024-11-27 10:05:19.772588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:04.743 [2024-11-27 10:05:19.811060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:04.743 Running I/O for 1 seconds...
00:33:05.683 1833.00 IOPS, 114.56 MiB/s
00:33:05.683
00:33:05.683 Latency(us)
00:33:05.683 [2024-11-27T09:05:21.149Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:05.683 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:33:05.683 Verification LBA range: start 0x0 length 0x400
00:33:05.683 Nvme0n1 : 1.01 1872.58 117.04 0.00 0.00 33435.02 2935.47 34297.17
00:33:05.683 [2024-11-27T09:05:21.149Z] ===================================================================================================================
00:33:05.683 [2024-11-27T09:05:21.149Z] Total : 1872.58 117.04 0.00 0.00 33435.02 2935.47 34297.17
00:33:05.683 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:33:05.683 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:33:05.683 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:33:05.683 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:33:05.683 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:33:05.683 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:33:05.683 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:33:05.683 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:05.683 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:33:05.683 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:05.683 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:33:05.945 rmmod nvme_tcp
00:33:05.945 rmmod nvme_fabrics
00:33:05.945 rmmod nvme_keyring
00:33:05.945 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:05.945 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:33:05.945 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:33:05.945 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 4101054 ']'
00:33:05.945 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 4101054
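nvmfcleanup has just unloaded the initiator-side kernel modules, and killprocess/nvmf_tcp_fini below stop the target and undo the network plumbing. Condensed into plain shell (the ip netns line is an assumption about what _remove_spdk_ns does, and $nvmfpid stands in for the target pid, 4101054 in this run; the rest mirrors the trace):

    sudo modprobe -v -r nvme-tcp          # also drags out nvme_fabrics/nvme_keyring, per the rmmod lines above
    kill "$nvmfpid" && wait "$nvmfpid"    # stop the target; wait assumes this shell started it
    # Strip only the SPDK-tagged firewall rules, leaving the rest of the ruleset alone:
    sudo iptables-save | grep -v SPDK_NVMF | sudo iptables-restore
    sudo ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # tear down the target netns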
00:33:05.945 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 4101054 ']'
00:33:05.945 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 4101054
00:33:05.945 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
00:33:05.945 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:05.945 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4101054
00:33:05.945 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:33:05.945 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:33:05.945 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4101054'
00:33:05.945 killing process with pid 4101054
00:33:05.945 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 4101054
00:33:05.945 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 4101054
00:33:05.945 [2024-11-27 10:05:21.385539] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:33:06.206 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:33:06.206 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:33:06.206 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:33:06.206 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr
00:33:06.207 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save
00:33:06.207 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:33:06.207 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore
00:33:06.207 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:33:06.207 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns
00:33:06.207 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:06.207 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:33:06.207 10:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:08.120 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:33:08.120 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
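The killprocess helper traced above reduces to a guarded kill-and-wait; a rough sketch follows (the special handling when the process turns out to be a sudo wrapper is elided, and the autotest_common.sh line numbers from the trace are noted in comments):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1               # @954: refuse an empty pid
        kill -0 "$pid" || return 0              # @958: nothing to do if it's already gone
        if [[ $(uname) == Linux ]]; then
            # @960: confirm the pid still names the process we launched
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [[ $process_name == sudo ]] && return 1   # sudo wrapper handled elsewhere
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                             # collect the exit status (pid must be our child)
    }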
00:33:08.120
00:33:08.120 real 0m14.680s
00:33:08.120 user 0m19.298s
00:33:08.120 sys 0m7.469s
10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable
10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:08.120 ************************************
00:33:08.120 END TEST nvmf_host_management
00:33:08.120 ************************************
00:33:08.120 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:33:08.120 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:33:08.120 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:08.120 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:33:08.120 ************************************
00:33:08.120 START TEST nvmf_lvol
00:33:08.120 ************************************
00:33:08.120 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:33:08.380 * Looking for test storage...
00:33:08.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version
00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l
00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l
00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-:
00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1
00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-:
00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2
00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<'
00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2
00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1
00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in
00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1
00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:08.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.380 --rc genhtml_branch_coverage=1 00:33:08.380 --rc genhtml_function_coverage=1 00:33:08.380 --rc genhtml_legend=1 00:33:08.380 --rc geninfo_all_blocks=1 00:33:08.380 --rc geninfo_unexecuted_blocks=1 00:33:08.380 00:33:08.380 ' 00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:08.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.380 --rc genhtml_branch_coverage=1 00:33:08.380 --rc genhtml_function_coverage=1 00:33:08.380 --rc genhtml_legend=1 00:33:08.380 --rc geninfo_all_blocks=1 00:33:08.380 --rc geninfo_unexecuted_blocks=1 00:33:08.380 00:33:08.380 ' 00:33:08.380 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:08.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.381 --rc genhtml_branch_coverage=1 00:33:08.381 --rc genhtml_function_coverage=1 00:33:08.381 --rc genhtml_legend=1 00:33:08.381 --rc geninfo_all_blocks=1 00:33:08.381 --rc geninfo_unexecuted_blocks=1 00:33:08.381 00:33:08.381 ' 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:08.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.381 --rc genhtml_branch_coverage=1 00:33:08.381 --rc genhtml_function_coverage=1 
00:33:08.381 --rc genhtml_legend=1 00:33:08.381 --rc geninfo_all_blocks=1 00:33:08.381 --rc geninfo_unexecuted_blocks=1 00:33:08.381 00:33:08.381 ' 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:08.381 10:05:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:33:08.381 10:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:16.524 10:05:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:16.524 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:16.524 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:16.524 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:16.525 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:16.525 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:16.525 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:16.525 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:16.525 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:16.525 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:16.525 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:16.525 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:16.525 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:33:16.525 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:16.525 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:16.525 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:16.525 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:16.525 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:16.525 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:16.525 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:16.525 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:16.525 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:16.525 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:16.525 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:33:16.525 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:16.525 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:16.525 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:16.525 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:16.525 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:16.525 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:16.525 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:16.525 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:16.525 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:16.525 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:16.525 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:16.525 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:16.525 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:16.525 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:16.525 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:16.525 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:16.525 10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:16.525 
10:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:16.525 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:16.525 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:16.525 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:16.525 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:16.525 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:16.525 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:16.525 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:16.525 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:16.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:16.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:33:16.525 00:33:16.525 --- 10.0.0.2 ping statistics --- 00:33:16.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.525 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:33:16.525 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:16.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:16.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:33:16.525 00:33:16.525 --- 10.0.0.1 ping statistics --- 00:33:16.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.525 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:33:16.525 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:16.525 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:33:16.525 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:16.525 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:16.525 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:16.525 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:16.525 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:16.525 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:16.525 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:16.525 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:33:16.525 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:16.525 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:16.525 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:16.525 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=4106111 00:33:16.525 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 4106111 00:33:16.525 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:33:16.525 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 4106111 ']' 00:33:16.525 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:16.525 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:16.525 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:16.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:16.525 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:16.525 10:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:16.525 [2024-11-27 10:05:31.349087] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
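Condensed, the namespace plumbing and reachability check just traced (nvmf/common.sh@271-@291) comes down to the commands below; interface and namespace names are taken from this run. The ipts wrapper seen at @287/@790 tags every rule with an SPDK_NVMF comment so teardown can later strip exactly these rules via iptables-save | grep -v SPDK_NVMF | iptables-restore:

    ip netns add cvl_0_0_ns_spdk                      # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'          # tagged so cleanup can find it
    ping -c 1 10.0.0.2                                # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> root ns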
00:33:16.525 [2024-11-27 10:05:31.350207] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:33:16.525 [2024-11-27 10:05:31.350256] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:16.525 [2024-11-27 10:05:31.450490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:16.525 [2024-11-27 10:05:31.503017] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:16.525 [2024-11-27 10:05:31.503065] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:16.525 [2024-11-27 10:05:31.503074] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:16.525 [2024-11-27 10:05:31.503081] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:16.525 [2024-11-27 10:05:31.503088] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:16.525 [2024-11-27 10:05:31.504813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:16.525 [2024-11-27 10:05:31.504970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:16.525 [2024-11-27 10:05:31.504970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:16.525 [2024-11-27 10:05:31.581309] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:16.525 [2024-11-27 10:05:31.582435] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:16.525 [2024-11-27 10:05:31.582850] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:16.525 [2024-11-27 10:05:31.582977] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
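nvmfappstart, traced above, launches nvmf_tgt inside that namespace and then blocks in waitforlisten until the RPC socket answers. A minimal stand-in for that wait, assuming the default /var/tmp/spdk.sock socket; rpc_get_methods is just a cheap query used to probe liveness, and the retry cap mirrors max_retries=100 from the trace:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
    nvmfpid=$!
    for (( i = 0; i < 100; i++ )); do                 # max_retries=100
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done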
00:33:16.786 10:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:16.786 10:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:33:16.786 10:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:16.786 10:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:16.786 10:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:16.786 10:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:16.786 10:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:17.047 [2024-11-27 10:05:32.381845] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:17.047 10:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:17.308 10:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:33:17.308 10:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:17.570 10:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:33:17.570 10:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:33:17.831 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:33:17.831 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=9ccce2bd-4ef0-4fe1-bda5-0dec5031c0db 00:33:17.831 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9ccce2bd-4ef0-4fe1-bda5-0dec5031c0db lvol 20 00:33:18.092 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f59ff4d2-2cbc-4b9c-abf6-adc473231e08 00:33:18.092 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:18.353 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f59ff4d2-2cbc-4b9c-abf6-adc473231e08 00:33:18.614 10:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:18.614 [2024-11-27 10:05:33.989825] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:33:18.614 10:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:18.876 10:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=4106672 00:33:18.876 10:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:33:18.876 10:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:33:19.817 10:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f59ff4d2-2cbc-4b9c-abf6-adc473231e08 MY_SNAPSHOT 00:33:20.078 10:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=889e42f0-5ffd-45f5-a230-2306d00bef7f 00:33:20.078 10:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f59ff4d2-2cbc-4b9c-abf6-adc473231e08 30 00:33:20.338 10:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 889e42f0-5ffd-45f5-a230-2306d00bef7f MY_CLONE 00:33:20.599 10:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=8c8531d0-9af4-45c0-91c3-c1c17ca763a5 00:33:20.599 10:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 8c8531d0-9af4-45c0-91c3-c1c17ca763a5 00:33:21.171 10:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 4106672 00:33:29.428 Initializing NVMe Controllers 00:33:29.428 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:33:29.428 Controller IO queue size 128, less than required. 00:33:29.428 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:29.428 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:33:29.428 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:33:29.428 Initialization complete. Launching workers. 
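Stripped of xtrace noise, nvmf_lvol.sh@21-@53 above is one short rpc.py sequence; UUIDs are captured from stdout exactly as the command substitutions in the trace do, and the snapshot/resize/clone/inflate steps run while spdk_nvme_perf (128-deep 4 KiB randwrite for 10 s on cores 0x18) is writing to the namespace. A sketch, with $rpc standing for the rpc.py path used throughout this run:

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                    # -> Malloc0
    $rpc bdev_malloc_create 64 512                    # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)   # 20 MiB lvol on the raid0 lvstore
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # while perf is running against the namespace:
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30                  # grow the live volume 20 -> 30 MiB
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"                   # decouple the clone from its snapshot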
00:33:29.428 ========================================================
00:33:29.428 Latency(us)
00:33:29.428 Device Information                                                        :     IOPS     MiB/s   Average       min       max
00:33:29.428 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15297.70     59.76   8369.27   1793.26  54529.21
00:33:29.428 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15406.80     60.18   8307.62   1280.99  98843.59
00:33:29.428 ========================================================
00:33:29.428 Total                                                                     : 30704.50    119.94   8338.34   1280.99  98843.59
00:33:29.428
00:33:29.428 10:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:29.428 10:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f59ff4d2-2cbc-4b9c-abf6-adc473231e08 00:33:29.428 10:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9ccce2bd-4ef0-4fe1-bda5-0dec5031c0db 00:33:29.689 10:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:33:29.689 10:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:33:29.689 10:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:33:29.689 10:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:29.689 10:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:33:29.689 10:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:29.690 10:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:33:29.690 10:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:29.690 10:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:29.690 rmmod nvme_tcp 00:33:29.690 rmmod nvme_fabrics 00:33:29.690 rmmod nvme_keyring 00:33:29.690 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:29.690 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:33:29.690 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:33:29.690 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 4106111 ']' 00:33:29.690 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 4106111 00:33:29.690 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 4106111 ']' 00:33:29.690 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 4106111 00:33:29.690 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:33:29.690 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:29.690 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4106111 00:33:29.690 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:29.690 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:29.690 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4106111' 00:33:29.690 killing process with pid 4106111 00:33:29.690 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 4106111 00:33:29.690 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 4106111 00:33:29.951 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:29.951 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:29.951 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:29.951 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:33:29.951 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:33:29.951 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:29.951 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:33:29.951 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:29.951 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:29.951 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:29.951 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:29.951 10:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:31.865 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:31.865 00:33:31.865 real 0m23.719s 00:33:31.865 user 0m55.277s 00:33:31.865 sys 0m10.796s 00:33:31.865 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:31.865 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:31.865 ************************************ 00:33:31.865 END TEST nvmf_lvol 00:33:31.865 ************************************ 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:32.126 ************************************ 00:33:32.126 START TEST nvmf_lvs_grow 00:33:32.126 
************************************ 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:33:32.126 * Looking for test storage... 00:33:32.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:32.126 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:33:32.127 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:32.127 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:32.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.127 --rc genhtml_branch_coverage=1 00:33:32.127 --rc genhtml_function_coverage=1 00:33:32.127 --rc genhtml_legend=1 00:33:32.127 --rc geninfo_all_blocks=1 00:33:32.127 --rc geninfo_unexecuted_blocks=1 00:33:32.127 00:33:32.127 ' 00:33:32.127 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:32.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.127 --rc genhtml_branch_coverage=1 00:33:32.127 --rc genhtml_function_coverage=1 00:33:32.127 --rc genhtml_legend=1 00:33:32.127 --rc geninfo_all_blocks=1 00:33:32.127 --rc geninfo_unexecuted_blocks=1 00:33:32.127 00:33:32.127 ' 00:33:32.127 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:32.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.127 --rc genhtml_branch_coverage=1 00:33:32.127 --rc genhtml_function_coverage=1 00:33:32.127 --rc genhtml_legend=1 00:33:32.127 --rc geninfo_all_blocks=1 00:33:32.127 --rc geninfo_unexecuted_blocks=1 00:33:32.127 00:33:32.127 ' 00:33:32.127 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:32.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.127 --rc genhtml_branch_coverage=1 00:33:32.127 --rc genhtml_function_coverage=1 00:33:32.127 --rc genhtml_legend=1 00:33:32.127 --rc geninfo_all_blocks=1 00:33:32.127 --rc geninfo_unexecuted_blocks=1 00:33:32.127 00:33:32.127 ' 00:33:32.127 10:05:47 
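The scripts/common.sh trace above (lt 1.15 2 and the cmp_versions internals) is the harness deciding whether the installed lcov predates 2.0 so it can pick compatible coverage flags: version strings are split on '.', '-' and ':' and compared component by component. A simplified re-implementation of the helper pair being traced, for illustration only, not the exact SPDK code:

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {                       # e.g. cmp_versions 1.15 '<' 2
        local IFS=.-:                      # split version strings on . - :
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local i a b
        for (( i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++ )); do
            a=${ver1[i]:-0} b=${ver2[i]:-0}    # missing components count as 0
            (( a == b )) && continue
            case $2 in
                '<') return $(( !(a < b) )) ;;
                '>') return $(( !(a > b) )) ;;
            esac
        done
        [[ $2 == *'='* ]]                  # equal versions: true only for <=, >=, ==
    }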
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:32.127 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:33:32.127 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:32.127 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:32.127 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:32.127 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:32.127 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:32.127 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:32.127 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:32.127 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:32.127 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:32.127 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:32.390 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:32.390 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:32.390 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:32.390 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:32.390 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:32.390 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:32.390 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:32.390 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:33:32.390 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:32.390 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:32.390 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:32.390 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.390 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.390 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.390 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:33:32.391 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.391 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:33:32.391 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:32.391 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:32.391 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:32.391 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:32.391 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
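The nvmf/common.sh@25-@31 fragment above shows how the target's command line is assembled: as a bash array rather than a string, so each appended element (the shm id, the 0xFFFF tracepoint mask, then --interrupt-mode and finally the ip netns exec prefix) keeps its own quoting. In outline, with TEST_INTERRUPT_MODE standing in for the flag the harness actually tests at @33:

    NVMF_APP=(./build/bin/nvmf_tgt)
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)       # shm id + full tracepoint mask
    [[ $TEST_INTERRUPT_MODE -eq 1 ]] && NVMF_APP+=(--interrupt-mode)
    NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # prepended at common.sh@293
    "${NVMF_APP[@]}" -m 0x1 &                         # what nvmfappstart ultimately runs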
00:33:32.391 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:32.391 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:32.391 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:32.391 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:32.391 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:32.391 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:32.391 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:32.391 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:33:32.391 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:32.391 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:32.391 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:32.391 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:32.391 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:32.391 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:32.391 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:32.391 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:32.391 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:32.391 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:32.391 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:33:32.391 10:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:40.538 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:40.538 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:33:40.538 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:40.538 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:40.538 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:40.538 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:40.538 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:40.538 10:05:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:33:40.538 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:40.538 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:33:40.538 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:33:40.538 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:33:40.538 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:33:40.538 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:33:40.538 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:33:40.538 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:40.538 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:40.538 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:40.538 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:40.538 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:40.538 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:40.538 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:40.538 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:40.538 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:40.538 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:40.538 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:40.538 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:40.538 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:40.538 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:40.538 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:40.538 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:40.538 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:40.538 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:40.538 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
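gather_supported_nvmf_pci_devs, whose trace starts here, is a sysfs walk: collect the Intel E810 functions (device IDs 0x1592 and 0x159b, per the e810 array above), then map each PCI address to its netdev through /sys/bus/pci/devices/<addr>/net. A standalone sketch of the same scan; only the two IDs this suite matched are included:

    e810=(0x1592 0x159b)                   # E810 device IDs accepted by the suite
    pci_devs=() net_devs=()
    for dev in /sys/bus/pci/devices/*; do
        [[ $(<"$dev/vendor") == 0x8086 ]] || continue
        for id in "${e810[@]}"; do
            [[ $(<"$dev/device") == "$id" ]] && pci_devs+=("${dev##*/}")
        done
    done
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # as in common.sh@411
        net_devs+=("${pci_net_devs[@]##*/}")
        echo "Found net devices under $pci: ${pci_net_devs[*]##*/}"
    done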
00:33:40.538 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:40.538 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:40.539 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:40.539 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:40.539 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:40.539 10:05:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:40.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:40.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:33:40.539 00:33:40.539 --- 10.0.0.2 ping statistics --- 00:33:40.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:40.539 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:40.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:40.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:33:40.539 00:33:40.539 --- 10.0.0.1 ping statistics --- 00:33:40.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:40.539 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:40.539 10:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:40.539 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:33:40.539 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:40.539 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:40.539 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:40.539 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=4112819 00:33:40.539 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 4112819 00:33:40.539 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:40.539 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 4112819 ']' 00:33:40.539 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:40.539 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:40.539 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:40.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:40.539 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:40.539 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:40.539 [2024-11-27 10:05:55.113118] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
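Unlike nvmf_lvol, the lvs_grow fixture that follows does not build on NIC-backed storage at all: it puts the lvstore on a file-backed AIO bdev so the backing file, and then the bdev, can be grown underneath a live lvstore. In outline, with $rpc as in the earlier sketch (cluster size, md-pages ratio, sizes and the 49-cluster check all come from the trace below):

    truncate -s 200M test/nvmf/target/aio_bdev        # 200 MiB backing file
    $rpc bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore \
        --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)  # 150 MiB lvol
    truncate -s 400M test/nvmf/target/aio_bdev        # grow the file for the bdev to pick up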
00:33:40.539 [2024-11-27 10:05:55.114268] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:33:40.539 [2024-11-27 10:05:55.114321] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:40.539 [2024-11-27 10:05:55.200740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:40.539 [2024-11-27 10:05:55.253181] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:40.540 [2024-11-27 10:05:55.253235] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:40.540 [2024-11-27 10:05:55.253244] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:40.540 [2024-11-27 10:05:55.253251] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:40.540 [2024-11-27 10:05:55.253258] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:40.540 [2024-11-27 10:05:55.254006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:40.540 [2024-11-27 10:05:55.331419] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:40.540 [2024-11-27 10:05:55.331716] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:40.540 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:40.540 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:33:40.540 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:40.540 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:40.540 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:40.540 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:40.540 10:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:40.802 [2024-11-27 10:05:56.150870] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:40.802 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:33:40.802 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:40.802 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:40.802 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:40.802 ************************************ 00:33:40.802 START TEST lvs_grow_clean 00:33:40.802 ************************************ 00:33:40.802 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:33:40.802 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:33:40.802 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:33:40.802 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:33:40.802 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:33:40.802 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:33:40.802 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:33:40.802 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:40.802 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:40.802 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:41.064 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:33:41.064 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:33:41.324 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=2f01fa3b-1c70-4768-8a97-ce9f1c58da32 00:33:41.324 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f01fa3b-1c70-4768-8a97-ce9f1c58da32 00:33:41.324 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:33:41.586 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:33:41.586 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:33:41.586 10:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2f01fa3b-1c70-4768-8a97-ce9f1c58da32 lvol 150 00:33:41.586 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=7a2b37f4-6016-4a9f-8f4d-63572cbe4aba 00:33:41.586 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:41.586 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:33:41.846 [2024-11-27 10:05:57.214579] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:33:41.847 [2024-11-27 10:05:57.214754] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:33:41.847 true 00:33:41.847 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f01fa3b-1c70-4768-8a97-ce9f1c58da32 00:33:41.847 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:33:42.109 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:33:42.109 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:42.370 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7a2b37f4-6016-4a9f-8f4d-63572cbe4aba 00:33:42.370 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:42.632 [2024-11-27 10:05:57.927298] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:42.632 10:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:42.893 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4113526 00:33:42.893 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:42.893 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:33:42.893 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4113526 /var/tmp/bdevperf.sock 00:33:42.893 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 4113526 ']' 00:33:42.893 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:33:42.893 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:42.893 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:42.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:42.893 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:42.893 10:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:33:42.893 [2024-11-27 10:05:58.184276] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:33:42.893 [2024-11-27 10:05:58.184358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4113526 ] 00:33:42.893 [2024-11-27 10:05:58.275685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:42.893 [2024-11-27 10:05:58.328973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:43.836 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:43.836 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:33:43.837 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:33:43.837 Nvme0n1 00:33:43.837 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:33:44.098 [ 00:33:44.098 { 00:33:44.098 "name": "Nvme0n1", 00:33:44.098 "aliases": [ 00:33:44.098 "7a2b37f4-6016-4a9f-8f4d-63572cbe4aba" 00:33:44.098 ], 00:33:44.098 "product_name": "NVMe disk", 00:33:44.098 "block_size": 4096, 00:33:44.098 "num_blocks": 38912, 00:33:44.098 "uuid": "7a2b37f4-6016-4a9f-8f4d-63572cbe4aba", 00:33:44.098 "numa_id": 0, 00:33:44.098 "assigned_rate_limits": { 00:33:44.098 "rw_ios_per_sec": 0, 00:33:44.098 "rw_mbytes_per_sec": 0, 00:33:44.098 "r_mbytes_per_sec": 0, 00:33:44.098 "w_mbytes_per_sec": 0 00:33:44.098 }, 00:33:44.098 "claimed": false, 00:33:44.098 "zoned": false, 00:33:44.098 "supported_io_types": { 00:33:44.098 "read": true, 00:33:44.098 "write": true, 00:33:44.098 "unmap": true, 00:33:44.098 "flush": true, 00:33:44.098 "reset": true, 00:33:44.098 "nvme_admin": true, 00:33:44.098 "nvme_io": true, 00:33:44.098 "nvme_io_md": false, 00:33:44.098 "write_zeroes": true, 00:33:44.098 "zcopy": false, 00:33:44.098 "get_zone_info": false, 00:33:44.098 "zone_management": false, 00:33:44.098 "zone_append": false, 00:33:44.098 "compare": true, 00:33:44.098 "compare_and_write": true, 00:33:44.098 "abort": true, 00:33:44.098 "seek_hole": false, 00:33:44.098 "seek_data": false, 00:33:44.098 "copy": true, 
00:33:44.098 "nvme_iov_md": false 00:33:44.098 }, 00:33:44.098 "memory_domains": [ 00:33:44.098 { 00:33:44.098 "dma_device_id": "system", 00:33:44.098 "dma_device_type": 1 00:33:44.098 } 00:33:44.098 ], 00:33:44.098 "driver_specific": { 00:33:44.098 "nvme": [ 00:33:44.098 { 00:33:44.098 "trid": { 00:33:44.098 "trtype": "TCP", 00:33:44.098 "adrfam": "IPv4", 00:33:44.098 "traddr": "10.0.0.2", 00:33:44.098 "trsvcid": "4420", 00:33:44.098 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:33:44.098 }, 00:33:44.098 "ctrlr_data": { 00:33:44.098 "cntlid": 1, 00:33:44.098 "vendor_id": "0x8086", 00:33:44.098 "model_number": "SPDK bdev Controller", 00:33:44.098 "serial_number": "SPDK0", 00:33:44.098 "firmware_revision": "25.01", 00:33:44.098 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:44.098 "oacs": { 00:33:44.098 "security": 0, 00:33:44.098 "format": 0, 00:33:44.098 "firmware": 0, 00:33:44.098 "ns_manage": 0 00:33:44.098 }, 00:33:44.098 "multi_ctrlr": true, 00:33:44.098 "ana_reporting": false 00:33:44.098 }, 00:33:44.098 "vs": { 00:33:44.098 "nvme_version": "1.3" 00:33:44.098 }, 00:33:44.098 "ns_data": { 00:33:44.098 "id": 1, 00:33:44.098 "can_share": true 00:33:44.098 } 00:33:44.098 } 00:33:44.098 ], 00:33:44.098 "mp_policy": "active_passive" 00:33:44.098 } 00:33:44.098 } 00:33:44.098 ] 00:33:44.098 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4113705 00:33:44.098 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:33:44.098 10:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:44.098 Running I/O for 10 seconds... 
00:33:45.487 Latency(us) 00:33:45.487 [2024-11-27T09:06:00.953Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:45.487 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:45.487 Nvme0n1 : 1.00 16764.00 65.48 0.00 0.00 0.00 0.00 0.00 00:33:45.487 [2024-11-27T09:06:00.953Z] =================================================================================================================== 00:33:45.487 [2024-11-27T09:06:00.953Z] Total : 16764.00 65.48 0.00 0.00 0.00 0.00 0.00 00:33:45.487 00:33:46.060 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2f01fa3b-1c70-4768-8a97-ce9f1c58da32 00:33:46.321 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:46.321 Nvme0n1 : 2.00 16954.50 66.23 0.00 0.00 0.00 0.00 0.00 00:33:46.321 [2024-11-27T09:06:01.787Z] =================================================================================================================== 00:33:46.321 [2024-11-27T09:06:01.787Z] Total : 16954.50 66.23 0.00 0.00 0.00 0.00 0.00 00:33:46.321 00:33:46.321 true 00:33:46.321 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f01fa3b-1c70-4768-8a97-ce9f1c58da32 00:33:46.321 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:33:46.581 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:33:46.581 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:33:46.581 10:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 4113705 00:33:47.154 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:47.154 Nvme0n1 : 3.00 17050.67 66.60 0.00 0.00 0.00 0.00 0.00 00:33:47.154 [2024-11-27T09:06:02.620Z] =================================================================================================================== 00:33:47.154 [2024-11-27T09:06:02.620Z] Total : 17050.67 66.60 0.00 0.00 0.00 0.00 0.00 00:33:47.154 00:33:48.096 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:48.096 Nvme0n1 : 4.00 18129.25 70.82 0.00 0.00 0.00 0.00 0.00 00:33:48.096 [2024-11-27T09:06:03.562Z] =================================================================================================================== 00:33:48.096 [2024-11-27T09:06:03.562Z] Total : 18129.25 70.82 0.00 0.00 0.00 0.00 0.00 00:33:48.096 00:33:49.483 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:49.483 Nvme0n1 : 5.00 19596.20 76.55 0.00 0.00 0.00 0.00 0.00 00:33:49.483 [2024-11-27T09:06:04.949Z] =================================================================================================================== 00:33:49.483 [2024-11-27T09:06:04.949Z] Total : 19596.20 76.55 0.00 0.00 0.00 0.00 0.00 00:33:49.483 00:33:50.425 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:50.425 Nvme0n1 : 6.00 20582.17 80.40 0.00 0.00 0.00 0.00 0.00 00:33:50.425 [2024-11-27T09:06:05.891Z] 
=================================================================================================================== 00:33:50.425 [2024-11-27T09:06:05.891Z] Total : 20582.17 80.40 0.00 0.00 0.00 0.00 0.00 00:33:50.425 00:33:51.367 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:51.367 Nvme0n1 : 7.00 21279.71 83.12 0.00 0.00 0.00 0.00 0.00 00:33:51.367 [2024-11-27T09:06:06.833Z] =================================================================================================================== 00:33:51.367 [2024-11-27T09:06:06.833Z] Total : 21279.71 83.12 0.00 0.00 0.00 0.00 0.00 00:33:51.367 00:33:52.308 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:52.308 Nvme0n1 : 8.00 21808.75 85.19 0.00 0.00 0.00 0.00 0.00 00:33:52.308 [2024-11-27T09:06:07.774Z] =================================================================================================================== 00:33:52.308 [2024-11-27T09:06:07.774Z] Total : 21808.75 85.19 0.00 0.00 0.00 0.00 0.00 00:33:52.308 00:33:53.249 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:53.249 Nvme0n1 : 9.00 22221.89 86.80 0.00 0.00 0.00 0.00 0.00 00:33:53.249 [2024-11-27T09:06:08.715Z] =================================================================================================================== 00:33:53.249 [2024-11-27T09:06:08.715Z] Total : 22221.89 86.80 0.00 0.00 0.00 0.00 0.00 00:33:53.249 00:33:54.189 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:54.189 Nvme0n1 : 10.00 22552.40 88.10 0.00 0.00 0.00 0.00 0.00 00:33:54.189 [2024-11-27T09:06:09.655Z] =================================================================================================================== 00:33:54.189 [2024-11-27T09:06:09.655Z] Total : 22552.40 88.10 0.00 0.00 0.00 0.00 0.00 00:33:54.189 00:33:54.189 00:33:54.189 Latency(us) 00:33:54.189 [2024-11-27T09:06:09.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:54.189 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:54.189 Nvme0n1 : 10.01 22552.55 88.10 0.00 0.00 5672.58 2880.85 32112.64 00:33:54.189 [2024-11-27T09:06:09.655Z] =================================================================================================================== 00:33:54.189 [2024-11-27T09:06:09.655Z] Total : 22552.55 88.10 0.00 0.00 5672.58 2880.85 32112.64 00:33:54.189 { 00:33:54.189 "results": [ 00:33:54.189 { 00:33:54.189 "job": "Nvme0n1", 00:33:54.189 "core_mask": "0x2", 00:33:54.189 "workload": "randwrite", 00:33:54.189 "status": "finished", 00:33:54.189 "queue_depth": 128, 00:33:54.189 "io_size": 4096, 00:33:54.189 "runtime": 10.005608, 00:33:54.189 "iops": 22552.552528541994, 00:33:54.189 "mibps": 88.09590831461716, 00:33:54.189 "io_failed": 0, 00:33:54.189 "io_timeout": 0, 00:33:54.189 "avg_latency_us": 5672.577782662389, 00:33:54.189 "min_latency_us": 2880.8533333333335, 00:33:54.189 "max_latency_us": 32112.64 00:33:54.189 } 00:33:54.189 ], 00:33:54.189 "core_count": 1 00:33:54.189 } 00:33:54.189 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4113526 00:33:54.189 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 4113526 ']' 00:33:54.189 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 4113526 00:33:54.189 
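For reference, the cluster accounting behind the assertions in this run: with 4 MiB clusters, the 200 MiB backing file gave 49 usable data clusters (the remainder apparently goes to lvstore metadata), the 400 MiB file after truncate, bdev_aio_rescan and bdev_lvol_grow_lvstore gives 99, and the 150 MiB lvol pins ceil(150/4) = 38 of them, which matches the num_allocated_clusters reported later and the free_clusters of 61 read below. The same arithmetic as a throwaway shell check:

    echo $(( 400 / 4 - 1 ))                     # 99 data clusters after the grow
    echo $(( (150 + 3) / 4 ))                   # 38 clusters consumed by the lvol (integer ceiling)
    echo $(( (400 / 4 - 1) - (150 + 3) / 4 ))   # 61 free clusters, matching sh@70 and sh@88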
10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:33:54.189 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:54.189 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4113526 00:33:54.450 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:54.450 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:54.450 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4113526' 00:33:54.450 killing process with pid 4113526 00:33:54.450 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 4113526 00:33:54.450 Received shutdown signal, test time was about 10.000000 seconds 00:33:54.450 00:33:54.450 Latency(us) 00:33:54.450 [2024-11-27T09:06:09.916Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:54.450 [2024-11-27T09:06:09.916Z] =================================================================================================================== 00:33:54.450 [2024-11-27T09:06:09.916Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:54.450 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 4113526 00:33:54.450 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:54.710 10:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:54.710 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f01fa3b-1c70-4768-8a97-ce9f1c58da32 00:33:54.710 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:33:54.971 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:33:54.972 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:33:54.972 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:55.233 [2024-11-27 10:06:10.482660] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:55.233 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f01fa3b-1c70-4768-8a97-ce9f1c58da32 00:33:55.233 
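The NOT invocation whose expansion follows is the negative half of the hot-remove check: bdev_aio_delete tore the base bdev out from under the lvstore (the vbdev_lvs_hotremove_cb notice above), so looking the lvstore up by UUID must now fail. The real helper in autotest_common.sh validates the executable and screens the exit status, as the es= lines below show; functionally it reduces to a simplified stand-in like this, with $lvs as the lvstore UUID:

    NOT() { ! "$@"; }    # illustration only; the autotest helper does more bookkeeping
    NOT $rootdir/scripts/rpc.py bdev_lvol_get_lvstores -u $lvs \
        && echo 'lvstore is gone, as expected'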
10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0
00:33:55.233 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f01fa3b-1c70-4768-8a97-ce9f1c58da32
00:33:55.233 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:33:55.233 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:33:55.233 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:33:55.233 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:33:55.233 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:33:55.233 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:33:55.233 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:33:55.233 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:33:55.233 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f01fa3b-1c70-4768-8a97-ce9f1c58da32
00:33:55.495 request:
00:33:55.495 {
00:33:55.495 "uuid": "2f01fa3b-1c70-4768-8a97-ce9f1c58da32",
00:33:55.495 "method": "bdev_lvol_get_lvstores",
00:33:55.495 "req_id": 1
00:33:55.495 }
00:33:55.495 Got JSON-RPC error response
00:33:55.495 response:
00:33:55.495 {
00:33:55.495 "code": -19,
00:33:55.495 "message": "No such device"
00:33:55.495 }
00:33:55.495 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1
00:33:55.495 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:33:55.495 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:33:55.495 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:33:55.495 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:33:55.495 aio_bdev
00:33:55.495 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev
7a2b37f4-6016-4a9f-8f4d-63572cbe4aba 00:33:55.495 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=7a2b37f4-6016-4a9f-8f4d-63572cbe4aba 00:33:55.495 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:55.496 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:33:55.496 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:55.496 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:55.496 10:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:55.757 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7a2b37f4-6016-4a9f-8f4d-63572cbe4aba -t 2000 00:33:56.018 [ 00:33:56.018 { 00:33:56.018 "name": "7a2b37f4-6016-4a9f-8f4d-63572cbe4aba", 00:33:56.018 "aliases": [ 00:33:56.018 "lvs/lvol" 00:33:56.018 ], 00:33:56.018 "product_name": "Logical Volume", 00:33:56.018 "block_size": 4096, 00:33:56.018 "num_blocks": 38912, 00:33:56.018 "uuid": "7a2b37f4-6016-4a9f-8f4d-63572cbe4aba", 00:33:56.018 "assigned_rate_limits": { 00:33:56.018 "rw_ios_per_sec": 0, 00:33:56.018 "rw_mbytes_per_sec": 0, 00:33:56.018 "r_mbytes_per_sec": 0, 00:33:56.018 "w_mbytes_per_sec": 0 00:33:56.018 }, 00:33:56.018 "claimed": false, 00:33:56.018 "zoned": false, 00:33:56.018 "supported_io_types": { 00:33:56.018 "read": true, 00:33:56.018 "write": true, 00:33:56.018 "unmap": true, 00:33:56.018 "flush": false, 00:33:56.018 "reset": true, 00:33:56.018 "nvme_admin": false, 00:33:56.018 "nvme_io": false, 00:33:56.018 "nvme_io_md": false, 00:33:56.018 "write_zeroes": true, 00:33:56.018 "zcopy": false, 00:33:56.018 "get_zone_info": false, 00:33:56.018 "zone_management": false, 00:33:56.018 "zone_append": false, 00:33:56.018 "compare": false, 00:33:56.018 "compare_and_write": false, 00:33:56.018 "abort": false, 00:33:56.018 "seek_hole": true, 00:33:56.018 "seek_data": true, 00:33:56.018 "copy": false, 00:33:56.018 "nvme_iov_md": false 00:33:56.018 }, 00:33:56.018 "driver_specific": { 00:33:56.018 "lvol": { 00:33:56.018 "lvol_store_uuid": "2f01fa3b-1c70-4768-8a97-ce9f1c58da32", 00:33:56.018 "base_bdev": "aio_bdev", 00:33:56.018 "thin_provision": false, 00:33:56.018 "num_allocated_clusters": 38, 00:33:56.018 "snapshot": false, 00:33:56.018 "clone": false, 00:33:56.018 "esnap_clone": false 00:33:56.018 } 00:33:56.018 } 00:33:56.018 } 00:33:56.018 ] 00:33:56.018 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:33:56.018 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f01fa3b-1c70-4768-8a97-ce9f1c58da32 00:33:56.018 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:33:56.018 10:06:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:56.018 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f01fa3b-1c70-4768-8a97-ce9f1c58da32 00:33:56.018 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:56.280 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:56.280 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7a2b37f4-6016-4a9f-8f4d-63572cbe4aba 00:33:56.542 10:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2f01fa3b-1c70-4768-8a97-ce9f1c58da32 00:33:56.803 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:56.803 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:57.065 00:33:57.065 real 0m16.056s 00:33:57.065 user 0m15.640s 00:33:57.065 sys 0m1.519s 00:33:57.065 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:57.065 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:33:57.065 ************************************ 00:33:57.065 END TEST lvs_grow_clean 00:33:57.065 ************************************ 00:33:57.065 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:33:57.065 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:57.065 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:57.065 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:57.065 ************************************ 00:33:57.065 START TEST lvs_grow_dirty 00:33:57.065 ************************************ 00:33:57.065 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:33:57.065 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:33:57.065 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:33:57.065 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:33:57.065 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:33:57.065 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:33:57.065 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:33:57.065 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:57.065 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:57.065 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:57.326 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:33:57.326 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:33:57.587 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=1d77f1b4-7e22-4240-93a1-bd5685ec8c11 00:33:57.587 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d77f1b4-7e22-4240-93a1-bd5685ec8c11 00:33:57.587 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:33:57.587 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:33:57.587 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:33:57.587 10:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1d77f1b4-7e22-4240-93a1-bd5685ec8c11 lvol 150 00:33:57.848 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=9bbd1e41-e931-4bc2-a95d-2dfd38993962 00:33:57.848 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:57.848 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:33:57.848 [2024-11-27 10:06:13.294551] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:33:57.848 [2024-11-27 10:06:13.294708] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:33:57.848 true 00:33:57.848 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d77f1b4-7e22-4240-93a1-bd5685ec8c11 00:33:57.848 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:33:58.110 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:33:58.110 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:58.371 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9bbd1e41-e931-4bc2-a95d-2dfd38993962 00:33:58.371 10:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:58.633 [2024-11-27 10:06:13.995234] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:58.633 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:58.894 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4117167 00:33:58.894 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:58.894 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:33:58.894 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4117167 /var/tmp/bdevperf.sock 00:33:58.894 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 4117167 ']' 00:33:58.894 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:58.894 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:58.894 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:58.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
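The dirty-path run repeats the export recipe with the new lvstore (1d77f1b4-7e22-4240-93a1-bd5685ec8c11) and lvol (9bbd1e41-e931-4bc2-a95d-2dfd38993962): a subsystem that allows any host with serial SPDK0, the lvol as its namespace, and data plus discovery listeners on the namespaced port. Condensed, with $lvol standing in for the lvol UUID:

    $rootdir/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # sh@41
    $rootdir/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 $lvol          # sh@42
    $rootdir/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rootdir/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420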
00:33:58.894 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:58.894 10:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:58.895 [2024-11-27 10:06:14.260601] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:33:58.895 [2024-11-27 10:06:14.260690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4117167 ] 00:33:58.895 [2024-11-27 10:06:14.350891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:59.155 [2024-11-27 10:06:14.384791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:59.725 10:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:59.725 10:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:33:59.725 10:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:33:59.985 Nvme0n1 00:33:59.985 10:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:34:00.245 [ 00:34:00.245 { 00:34:00.245 "name": "Nvme0n1", 00:34:00.245 "aliases": [ 00:34:00.245 "9bbd1e41-e931-4bc2-a95d-2dfd38993962" 00:34:00.245 ], 00:34:00.245 "product_name": "NVMe disk", 00:34:00.245 "block_size": 4096, 00:34:00.245 "num_blocks": 38912, 00:34:00.245 "uuid": "9bbd1e41-e931-4bc2-a95d-2dfd38993962", 00:34:00.245 "numa_id": 0, 00:34:00.245 "assigned_rate_limits": { 00:34:00.245 "rw_ios_per_sec": 0, 00:34:00.245 "rw_mbytes_per_sec": 0, 00:34:00.245 "r_mbytes_per_sec": 0, 00:34:00.245 "w_mbytes_per_sec": 0 00:34:00.245 }, 00:34:00.245 "claimed": false, 00:34:00.245 "zoned": false, 00:34:00.245 "supported_io_types": { 00:34:00.245 "read": true, 00:34:00.245 "write": true, 00:34:00.245 "unmap": true, 00:34:00.245 "flush": true, 00:34:00.245 "reset": true, 00:34:00.245 "nvme_admin": true, 00:34:00.245 "nvme_io": true, 00:34:00.245 "nvme_io_md": false, 00:34:00.245 "write_zeroes": true, 00:34:00.245 "zcopy": false, 00:34:00.245 "get_zone_info": false, 00:34:00.245 "zone_management": false, 00:34:00.245 "zone_append": false, 00:34:00.245 "compare": true, 00:34:00.245 "compare_and_write": true, 00:34:00.245 "abort": true, 00:34:00.245 "seek_hole": false, 00:34:00.245 "seek_data": false, 00:34:00.245 "copy": true, 00:34:00.245 "nvme_iov_md": false 00:34:00.245 }, 00:34:00.245 "memory_domains": [ 00:34:00.245 { 00:34:00.245 "dma_device_id": "system", 00:34:00.245 "dma_device_type": 1 00:34:00.245 } 00:34:00.245 ], 00:34:00.245 "driver_specific": { 00:34:00.245 "nvme": [ 00:34:00.245 { 00:34:00.245 "trid": { 00:34:00.245 "trtype": "TCP", 00:34:00.245 "adrfam": "IPv4", 00:34:00.245 "traddr": "10.0.0.2", 00:34:00.245 "trsvcid": "4420", 00:34:00.245 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:34:00.245 }, 00:34:00.245 "ctrlr_data": 
{ 00:34:00.245 "cntlid": 1, 00:34:00.245 "vendor_id": "0x8086", 00:34:00.245 "model_number": "SPDK bdev Controller", 00:34:00.245 "serial_number": "SPDK0", 00:34:00.245 "firmware_revision": "25.01", 00:34:00.245 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:00.245 "oacs": { 00:34:00.245 "security": 0, 00:34:00.245 "format": 0, 00:34:00.245 "firmware": 0, 00:34:00.245 "ns_manage": 0 00:34:00.245 }, 00:34:00.245 "multi_ctrlr": true, 00:34:00.245 "ana_reporting": false 00:34:00.245 }, 00:34:00.245 "vs": { 00:34:00.245 "nvme_version": "1.3" 00:34:00.245 }, 00:34:00.245 "ns_data": { 00:34:00.245 "id": 1, 00:34:00.245 "can_share": true 00:34:00.245 } 00:34:00.245 } 00:34:00.245 ], 00:34:00.245 "mp_policy": "active_passive" 00:34:00.245 } 00:34:00.245 } 00:34:00.245 ] 00:34:00.245 10:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4117323 00:34:00.245 10:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:34:00.245 10:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:00.245 Running I/O for 10 seconds... 00:34:01.625 Latency(us) 00:34:01.625 [2024-11-27T09:06:17.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:01.625 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:01.625 Nvme0n1 : 1.00 17536.00 68.50 0.00 0.00 0.00 0.00 0.00 00:34:01.625 [2024-11-27T09:06:17.091Z] =================================================================================================================== 00:34:01.625 [2024-11-27T09:06:17.091Z] Total : 17536.00 68.50 0.00 0.00 0.00 0.00 0.00 00:34:01.625 00:34:02.194 10:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1d77f1b4-7e22-4240-93a1-bd5685ec8c11 00:34:02.454 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:02.454 Nvme0n1 : 2.00 17785.00 69.47 0.00 0.00 0.00 0.00 0.00 00:34:02.454 [2024-11-27T09:06:17.920Z] =================================================================================================================== 00:34:02.454 [2024-11-27T09:06:17.920Z] Total : 17785.00 69.47 0.00 0.00 0.00 0.00 0.00 00:34:02.454 00:34:02.454 true 00:34:02.454 10:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d77f1b4-7e22-4240-93a1-bd5685ec8c11 00:34:02.454 10:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:34:02.713 10:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:34:02.713 10:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:34:02.713 10:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 4117323 00:34:03.283 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:03.283 Nvme0n1 : 
3.00 17868.00 69.80 0.00 0.00 0.00 0.00 0.00 00:34:03.283 [2024-11-27T09:06:18.749Z] =================================================================================================================== 00:34:03.283 [2024-11-27T09:06:18.749Z] Total : 17868.00 69.80 0.00 0.00 0.00 0.00 0.00 00:34:03.283 00:34:04.223 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:04.223 Nvme0n1 : 4.00 17909.50 69.96 0.00 0.00 0.00 0.00 0.00 00:34:04.223 [2024-11-27T09:06:19.689Z] =================================================================================================================== 00:34:04.223 [2024-11-27T09:06:19.689Z] Total : 17909.50 69.96 0.00 0.00 0.00 0.00 0.00 00:34:04.223 00:34:05.604 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:05.604 Nvme0n1 : 5.00 18594.80 72.64 0.00 0.00 0.00 0.00 0.00 00:34:05.604 [2024-11-27T09:06:21.070Z] =================================================================================================================== 00:34:05.604 [2024-11-27T09:06:21.070Z] Total : 18594.80 72.64 0.00 0.00 0.00 0.00 0.00 00:34:05.604 00:34:06.547 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:06.547 Nvme0n1 : 6.00 19739.67 77.11 0.00 0.00 0.00 0.00 0.00 00:34:06.547 [2024-11-27T09:06:22.013Z] =================================================================================================================== 00:34:06.547 [2024-11-27T09:06:22.013Z] Total : 19739.67 77.11 0.00 0.00 0.00 0.00 0.00 00:34:06.547 00:34:07.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:07.486 Nvme0n1 : 7.00 20564.29 80.33 0.00 0.00 0.00 0.00 0.00 00:34:07.486 [2024-11-27T09:06:22.952Z] =================================================================================================================== 00:34:07.486 [2024-11-27T09:06:22.952Z] Total : 20564.29 80.33 0.00 0.00 0.00 0.00 0.00 00:34:07.486 00:34:08.454 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:08.454 Nvme0n1 : 8.00 21184.62 82.75 0.00 0.00 0.00 0.00 0.00 00:34:08.454 [2024-11-27T09:06:23.920Z] =================================================================================================================== 00:34:08.454 [2024-11-27T09:06:23.920Z] Total : 21184.62 82.75 0.00 0.00 0.00 0.00 0.00 00:34:08.454 00:34:09.395 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:09.395 Nvme0n1 : 9.00 21667.11 84.64 0.00 0.00 0.00 0.00 0.00 00:34:09.395 [2024-11-27T09:06:24.861Z] =================================================================================================================== 00:34:09.395 [2024-11-27T09:06:24.861Z] Total : 21667.11 84.64 0.00 0.00 0.00 0.00 0.00 00:34:09.395 00:34:10.337 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:10.337 Nvme0n1 : 10.00 22053.10 86.14 0.00 0.00 0.00 0.00 0.00 00:34:10.337 [2024-11-27T09:06:25.803Z] =================================================================================================================== 00:34:10.337 [2024-11-27T09:06:25.803Z] Total : 22053.10 86.14 0.00 0.00 0.00 0.00 0.00 00:34:10.337 00:34:10.337 00:34:10.337 Latency(us) 00:34:10.337 [2024-11-27T09:06:25.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:10.337 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:10.337 Nvme0n1 : 10.00 22056.21 86.16 0.00 0.00 5800.80 2867.20 29272.75 00:34:10.338 
[2024-11-27T09:06:25.804Z] =================================================================================================================== 00:34:10.338 [2024-11-27T09:06:25.804Z] Total : 22056.21 86.16 0.00 0.00 5800.80 2867.20 29272.75 00:34:10.338 { 00:34:10.338 "results": [ 00:34:10.338 { 00:34:10.338 "job": "Nvme0n1", 00:34:10.338 "core_mask": "0x2", 00:34:10.338 "workload": "randwrite", 00:34:10.338 "status": "finished", 00:34:10.338 "queue_depth": 128, 00:34:10.338 "io_size": 4096, 00:34:10.338 "runtime": 10.004395, 00:34:10.338 "iops": 22056.206297332323, 00:34:10.338 "mibps": 86.15705584895439, 00:34:10.338 "io_failed": 0, 00:34:10.338 "io_timeout": 0, 00:34:10.338 "avg_latency_us": 5800.798343144853, 00:34:10.338 "min_latency_us": 2867.2, 00:34:10.338 "max_latency_us": 29272.746666666666 00:34:10.338 } 00:34:10.338 ], 00:34:10.338 "core_count": 1 00:34:10.338 } 00:34:10.338 10:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4117167 00:34:10.338 10:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 4117167 ']' 00:34:10.338 10:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 4117167 00:34:10.338 10:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:34:10.338 10:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:10.338 10:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4117167 00:34:10.338 10:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:10.338 10:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:10.338 10:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4117167' 00:34:10.338 killing process with pid 4117167 00:34:10.338 10:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 4117167 00:34:10.338 Received shutdown signal, test time was about 10.000000 seconds 00:34:10.338 00:34:10.338 Latency(us) 00:34:10.338 [2024-11-27T09:06:25.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:10.338 [2024-11-27T09:06:25.804Z] =================================================================================================================== 00:34:10.338 [2024-11-27T09:06:25.804Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:10.338 10:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 4117167 00:34:10.599 10:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:10.860 10:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:34:10.860 10:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d77f1b4-7e22-4240-93a1-bd5685ec8c11 00:34:10.860 10:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:34:11.122 10:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:34:11.122 10:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:34:11.122 10:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 4112819 00:34:11.122 10:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 4112819 00:34:11.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 4112819 Killed "${NVMF_APP[@]}" "$@" 00:34:11.122 10:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:34:11.122 10:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:34:11.122 10:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:11.122 10:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:11.122 10:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:11.122 10:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=4119453 00:34:11.122 10:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 4119453 00:34:11.122 10:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:34:11.122 10:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 4119453 ']' 00:34:11.122 10:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:11.122 10:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:11.122 10:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:11.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
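A note on the bdevperf JSON summary a few lines up: the reported throughput is just IOPS scaled by the fixed 4 KiB I/O size, so the figures can be sanity-checked offline. A minimal sketch, assuming the "results" object has been saved to a file named perf.json (a hypothetical name, not produced by this run) and that jq and awk are available on the test node:

    # Recompute MiB/s from the bdevperf summary: iops * io_size / 2^20.
    jq -r '.results[0] | "\(.iops) \(.io_size) \(.mibps)"' perf.json |
      awk '{ printf "computed=%.8f reported=%.8f\n", $1 * $2 / 1048576, $3 }'
    # With iops=22056.206297332323 and io_size=4096 this yields 86.15705585,
    # matching the "mibps": 86.15705584895439 field logged above.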
00:34:11.122 10:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:11.122 10:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:11.384 [2024-11-27 10:06:26.621344] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:11.384 [2024-11-27 10:06:26.622458] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:34:11.384 [2024-11-27 10:06:26.622512] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:11.384 [2024-11-27 10:06:26.718555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:11.384 [2024-11-27 10:06:26.751910] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:11.384 [2024-11-27 10:06:26.751941] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:11.384 [2024-11-27 10:06:26.751948] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:11.384 [2024-11-27 10:06:26.751952] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:11.384 [2024-11-27 10:06:26.751956] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:11.384 [2024-11-27 10:06:26.752466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:11.384 [2024-11-27 10:06:26.804368] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:11.384 [2024-11-27 10:06:26.804561] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
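The NOTICE lines above show the relaunched target switching each spdk_thread to interrupt mode. A condensed sketch of the same launch-and-wait pattern, using the checkout path seen throughout this run; it mirrors what nvmfappstart/waitforlisten do rather than reproducing them verbatim:

    # Start nvmf_tgt in interrupt mode on one core inside the target netns,
    # then poll the RPC socket until the app answers (roughly what
    # waitforlisten does before returning 0).
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    nvmfpid=$!
    for _ in $(seq 1 100); do
      "$SPDK_DIR/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
    done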
00:34:12.329 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:12.329 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:34:12.329 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:12.329 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:12.329 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:12.329 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:12.329 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:12.329 [2024-11-27 10:06:27.634661] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:34:12.329 [2024-11-27 10:06:27.634894] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:34:12.329 [2024-11-27 10:06:27.634983] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:34:12.329 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:34:12.329 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 9bbd1e41-e931-4bc2-a95d-2dfd38993962 00:34:12.329 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=9bbd1e41-e931-4bc2-a95d-2dfd38993962 00:34:12.329 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:12.329 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:34:12.329 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:12.329 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:12.329 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:12.591 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9bbd1e41-e931-4bc2-a95d-2dfd38993962 -t 2000 00:34:12.591 [ 00:34:12.591 { 00:34:12.591 "name": "9bbd1e41-e931-4bc2-a95d-2dfd38993962", 00:34:12.591 "aliases": [ 00:34:12.591 "lvs/lvol" 00:34:12.591 ], 00:34:12.591 "product_name": "Logical Volume", 00:34:12.591 "block_size": 4096, 00:34:12.591 "num_blocks": 38912, 00:34:12.591 "uuid": "9bbd1e41-e931-4bc2-a95d-2dfd38993962", 00:34:12.591 "assigned_rate_limits": { 00:34:12.591 "rw_ios_per_sec": 0, 00:34:12.591 "rw_mbytes_per_sec": 0, 00:34:12.591 
"r_mbytes_per_sec": 0, 00:34:12.591 "w_mbytes_per_sec": 0 00:34:12.591 }, 00:34:12.591 "claimed": false, 00:34:12.591 "zoned": false, 00:34:12.591 "supported_io_types": { 00:34:12.591 "read": true, 00:34:12.591 "write": true, 00:34:12.591 "unmap": true, 00:34:12.591 "flush": false, 00:34:12.591 "reset": true, 00:34:12.591 "nvme_admin": false, 00:34:12.591 "nvme_io": false, 00:34:12.591 "nvme_io_md": false, 00:34:12.591 "write_zeroes": true, 00:34:12.591 "zcopy": false, 00:34:12.591 "get_zone_info": false, 00:34:12.591 "zone_management": false, 00:34:12.591 "zone_append": false, 00:34:12.591 "compare": false, 00:34:12.591 "compare_and_write": false, 00:34:12.591 "abort": false, 00:34:12.591 "seek_hole": true, 00:34:12.591 "seek_data": true, 00:34:12.591 "copy": false, 00:34:12.591 "nvme_iov_md": false 00:34:12.591 }, 00:34:12.591 "driver_specific": { 00:34:12.591 "lvol": { 00:34:12.591 "lvol_store_uuid": "1d77f1b4-7e22-4240-93a1-bd5685ec8c11", 00:34:12.591 "base_bdev": "aio_bdev", 00:34:12.591 "thin_provision": false, 00:34:12.591 "num_allocated_clusters": 38, 00:34:12.591 "snapshot": false, 00:34:12.591 "clone": false, 00:34:12.591 "esnap_clone": false 00:34:12.591 } 00:34:12.591 } 00:34:12.591 } 00:34:12.591 ] 00:34:12.591 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:34:12.591 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d77f1b4-7e22-4240-93a1-bd5685ec8c11 00:34:12.591 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:34:12.852 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:34:12.852 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d77f1b4-7e22-4240-93a1-bd5685ec8c11 00:34:12.852 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:34:13.113 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:34:13.113 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:13.113 [2024-11-27 10:06:28.532960] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:34:13.375 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d77f1b4-7e22-4240-93a1-bd5685ec8c11 00:34:13.375 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:34:13.375 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d77f1b4-7e22-4240-93a1-bd5685ec8c11 00:34:13.375 10:06:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:13.375 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:13.375 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:13.375 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:13.375 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:13.375 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:13.375 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:13.375 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:34:13.375 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d77f1b4-7e22-4240-93a1-bd5685ec8c11 00:34:13.375 request: 00:34:13.375 { 00:34:13.375 "uuid": "1d77f1b4-7e22-4240-93a1-bd5685ec8c11", 00:34:13.375 "method": "bdev_lvol_get_lvstores", 00:34:13.375 "req_id": 1 00:34:13.375 } 00:34:13.375 Got JSON-RPC error response 00:34:13.375 response: 00:34:13.375 { 00:34:13.375 "code": -19, 00:34:13.375 "message": "No such device" 00:34:13.375 } 00:34:13.375 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:34:13.375 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:13.375 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:13.375 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:13.375 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:13.636 aio_bdev 00:34:13.636 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9bbd1e41-e931-4bc2-a95d-2dfd38993962 00:34:13.636 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=9bbd1e41-e931-4bc2-a95d-2dfd38993962 00:34:13.636 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:13.636 10:06:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:34:13.636 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:13.636 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:13.636 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:13.899 10:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9bbd1e41-e931-4bc2-a95d-2dfd38993962 -t 2000 00:34:13.899 [ 00:34:13.899 { 00:34:13.899 "name": "9bbd1e41-e931-4bc2-a95d-2dfd38993962", 00:34:13.899 "aliases": [ 00:34:13.899 "lvs/lvol" 00:34:13.899 ], 00:34:13.899 "product_name": "Logical Volume", 00:34:13.899 "block_size": 4096, 00:34:13.899 "num_blocks": 38912, 00:34:13.899 "uuid": "9bbd1e41-e931-4bc2-a95d-2dfd38993962", 00:34:13.899 "assigned_rate_limits": { 00:34:13.899 "rw_ios_per_sec": 0, 00:34:13.899 "rw_mbytes_per_sec": 0, 00:34:13.899 "r_mbytes_per_sec": 0, 00:34:13.899 "w_mbytes_per_sec": 0 00:34:13.899 }, 00:34:13.899 "claimed": false, 00:34:13.899 "zoned": false, 00:34:13.899 "supported_io_types": { 00:34:13.899 "read": true, 00:34:13.899 "write": true, 00:34:13.899 "unmap": true, 00:34:13.899 "flush": false, 00:34:13.899 "reset": true, 00:34:13.899 "nvme_admin": false, 00:34:13.899 "nvme_io": false, 00:34:13.899 "nvme_io_md": false, 00:34:13.899 "write_zeroes": true, 00:34:13.899 "zcopy": false, 00:34:13.899 "get_zone_info": false, 00:34:13.899 "zone_management": false, 00:34:13.899 "zone_append": false, 00:34:13.899 "compare": false, 00:34:13.899 "compare_and_write": false, 00:34:13.899 "abort": false, 00:34:13.899 "seek_hole": true, 00:34:13.899 "seek_data": true, 00:34:13.899 "copy": false, 00:34:13.899 "nvme_iov_md": false 00:34:13.899 }, 00:34:13.899 "driver_specific": { 00:34:13.899 "lvol": { 00:34:13.899 "lvol_store_uuid": "1d77f1b4-7e22-4240-93a1-bd5685ec8c11", 00:34:13.899 "base_bdev": "aio_bdev", 00:34:13.899 "thin_provision": false, 00:34:13.899 "num_allocated_clusters": 38, 00:34:13.899 "snapshot": false, 00:34:13.899 "clone": false, 00:34:13.900 "esnap_clone": false 00:34:13.900 } 00:34:13.900 } 00:34:13.900 } 00:34:13.900 ] 00:34:13.900 10:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:34:13.900 10:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d77f1b4-7e22-4240-93a1-bd5685ec8c11 00:34:13.900 10:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:34:14.161 10:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:34:14.161 10:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d77f1b4-7e22-4240-93a1-bd5685ec8c11 00:34:14.161 10:06:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:34:14.161 10:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:34:14.161 10:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9bbd1e41-e931-4bc2-a95d-2dfd38993962 00:34:14.421 10:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1d77f1b4-7e22-4240-93a1-bd5685ec8c11 00:34:14.682 10:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:14.682 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:14.944 00:34:14.944 real 0m17.797s 00:34:14.944 user 0m35.627s 00:34:14.944 sys 0m3.165s 00:34:14.944 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:14.944 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:14.944 ************************************ 00:34:14.944 END TEST lvs_grow_dirty 00:34:14.944 ************************************ 00:34:14.944 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:34:14.944 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:34:14.944 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:34:14.944 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:34:14.944 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:34:14.944 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:34:14.944 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:34:14.944 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:34:14.944 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:34:14.944 nvmf_trace.0 00:34:14.944 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:34:14.944 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:34:14.944 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:14.944 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
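The dirty-recovery case that just finished (END TEST lvs_grow_dirty) hinges on the cluster checks traced above at lines 88-89 of nvmf_lvs_grow.sh: after the target is killed mid-run and the blobstore is recovered, the lvstore must still report 61 free clusters and 99 total data clusters. A condensed sketch of that assertion, using the same rpc.py and jq calls shown in the trace:

    # Verify the recovered lvstore geometry, as the @88/@89 checks above do.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    uuid=1d77f1b4-7e22-4240-93a1-bd5685ec8c11
    free_clusters=$("$rpc" bdev_lvol_get_lvstores -u "$uuid" | jq -r '.[0].free_clusters')
    data_clusters=$("$rpc" bdev_lvol_get_lvstores -u "$uuid" | jq -r '.[0].total_data_clusters')
    (( free_clusters == 61 )) && (( data_clusters == 99 )) || echo 'lvstore geometry mismatch' >&2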
00:34:14.944 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:14.944 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:34:14.944 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:14.944 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:14.944 rmmod nvme_tcp 00:34:14.944 rmmod nvme_fabrics 00:34:14.944 rmmod nvme_keyring 00:34:14.944 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:14.944 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:34:14.944 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:34:14.944 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 4119453 ']' 00:34:14.944 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 4119453 00:34:14.944 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 4119453 ']' 00:34:14.944 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 4119453 00:34:14.944 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:34:14.944 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:14.944 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4119453 00:34:15.205 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:15.205 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:15.205 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4119453' 00:34:15.205 killing process with pid 4119453 00:34:15.205 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 4119453 00:34:15.206 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 4119453 00:34:15.206 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:15.206 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:15.206 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:15.206 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:34:15.206 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:34:15.206 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:15.206 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:34:15.206 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:15.206 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:15.206 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:15.206 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:15.206 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:17.196 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:17.196 00:34:17.196 real 0m45.270s 00:34:17.196 user 0m54.343s 00:34:17.196 sys 0m10.760s 00:34:17.196 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:17.196 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:17.196 ************************************ 00:34:17.196 END TEST nvmf_lvs_grow 00:34:17.196 ************************************ 00:34:17.456 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:34:17.456 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:17.456 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:17.456 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:17.456 ************************************ 00:34:17.456 START TEST nvmf_bdev_io_wait 00:34:17.456 ************************************ 00:34:17.456 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:34:17.456 * Looking for test storage... 
00:34:17.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:17.457 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:17.457 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:34:17.457 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:17.457 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:17.457 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:17.457 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:17.457 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:17.457 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:34:17.457 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:34:17.457 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:34:17.457 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:34:17.457 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:34:17.457 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:34:17.457 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:34:17.457 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:17.457 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:34:17.457 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:34:17.457 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:17.457 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:17.717 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:34:17.717 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:34:17.717 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:17.717 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:34:17.717 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:34:17.717 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:34:17.717 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:34:17.717 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:17.717 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:34:17.717 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:34:17.717 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:17.717 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:17.717 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:34:17.717 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:17.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.718 --rc genhtml_branch_coverage=1 00:34:17.718 --rc genhtml_function_coverage=1 00:34:17.718 --rc genhtml_legend=1 00:34:17.718 --rc geninfo_all_blocks=1 00:34:17.718 --rc geninfo_unexecuted_blocks=1 00:34:17.718 00:34:17.718 ' 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:17.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.718 --rc genhtml_branch_coverage=1 00:34:17.718 --rc genhtml_function_coverage=1 00:34:17.718 --rc genhtml_legend=1 00:34:17.718 --rc geninfo_all_blocks=1 00:34:17.718 --rc geninfo_unexecuted_blocks=1 00:34:17.718 00:34:17.718 ' 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:17.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.718 --rc genhtml_branch_coverage=1 00:34:17.718 --rc genhtml_function_coverage=1 00:34:17.718 --rc genhtml_legend=1 00:34:17.718 --rc geninfo_all_blocks=1 00:34:17.718 --rc geninfo_unexecuted_blocks=1 00:34:17.718 00:34:17.718 ' 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:17.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.718 --rc genhtml_branch_coverage=1 00:34:17.718 --rc genhtml_function_coverage=1 00:34:17.718 --rc genhtml_legend=1 00:34:17.718 --rc geninfo_all_blocks=1 00:34:17.718 --rc 
geninfo_unexecuted_blocks=1 00:34:17.718 00:34:17.718 ' 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:34:17.718 10:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:25.862 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:25.862 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:34:25.862 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:25.862 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:25.862 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:25.862 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:25.862 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:34:25.862 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:34:25.862 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:25.862 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:34:25.862 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:34:25.862 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:34:25.862 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:34:25.862 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:34:25.862 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:34:25.862 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:25.863 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:25.863 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:25.863 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:25.863 
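The scan above resolves each matching PCI function to its kernel net device by globbing the net/ directory the driver exposes in sysfs (pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) in the trace); that is all the "Found net devices under ..." lines amount to. A stand-alone sketch of the same lookup for the first e810 port found above:

    # Map a PCI function to its net device(s) via sysfs, as nvmf/common.sh does.
    pci=0000:4b:00.0
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
      [ -e "$dev" ] && echo "Found net device under $pci: ${dev##*/}"
    done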
10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:25.863 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:34:25.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:25.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms
00:34:25.863
00:34:25.863 --- 10.0.0.2 ping statistics ---
00:34:25.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:25.863 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms
00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:25.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:25.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms
00:34:25.863
00:34:25.863 --- 10.0.0.1 ping statistics ---
00:34:25.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:25.863 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms
00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0
00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:34:25.863 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:25.864 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:34:25.864 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:34:25.864 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:25.864 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:34:25.864 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:34:25.864 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc
00:34:25.864 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:34:25.864 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:25.864 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:34:25.864 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=4124270
00:34:25.864 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 4124270
00:34:25.864 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc
00:34:25.864 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 4124270 ']'
00:34:25.864 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:25.864 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:25.864 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:25.864 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:25.864 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:25.864 [2024-11-27 10:06:40.540398] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:25.864 [2024-11-27 10:06:40.541508] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:34:25.864 [2024-11-27 10:06:40.541557] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:25.864 [2024-11-27 10:06:40.644143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:25.864 [2024-11-27 10:06:40.698851] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:25.864 [2024-11-27 10:06:40.698901] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:25.864 [2024-11-27 10:06:40.698909] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:25.864 [2024-11-27 10:06:40.698916] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:25.864 [2024-11-27 10:06:40.698923] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:25.864 [2024-11-27 10:06:40.701131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:25.864 [2024-11-27 10:06:40.701299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:25.864 [2024-11-27 10:06:40.701177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:25.864 [2024-11-27 10:06:40.701434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:25.864 [2024-11-27 10:06:40.701991] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
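The nvmf_tcp_init block above splits the two E810 ports between network namespaces so a single host can act as both target and initiator over real hardware: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace and carries the target address 10.0.0.2, while cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1. A minimal sketch of the same wiring, with every name and address taken from this log (run as root; the two net devices are assumed to already exist):

    ip netns add cvl_0_0_ns_spdk                      # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port out of the default ns
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address, default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator side; the comment tag is what lets the
    # cleanup step later strip exactly these rules via iptables-save | grep -v SPDK_NVMF
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                # default ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> default ns

The target itself is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt --interrupt-mode -m 0xF --wait-for-rpc, as shown above), which is why NVMF_APP is prefixed with the NVMF_TARGET_NS_CMD array for the rest of the run.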
00:34:26.125 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:26.125 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:34:26.125 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:26.125 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:26.125 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:26.125 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:26.125 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:34:26.125 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.125 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:26.125 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.125 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:34:26.125 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.125 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:26.125 [2024-11-27 10:06:41.466433] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:26.125 [2024-11-27 10:06:41.466979] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:26.125 [2024-11-27 10:06:41.466999] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:26.125 [2024-11-27 10:06:41.467216] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
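Because the target was started with --wait-for-rpc, it parks before subsystem initialization, which is the window in which bdev_set_options can still be issued; the deliberately tiny pool configured here (5 bdev_io structures, per-thread cache of 1) is what starves the workload of bdev_io objects later and pushes submissions onto the queued-I/O-wait path this bdev_io_wait test is named for. rpc_cmd is a wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock, so the two calls above amount to roughly this, issued by hand from the repo root (a sketch, not the helper's exact plumbing):

    ./scripts/rpc.py bdev_set_options -p 5 -c 1   # -p bdev_io_pool_size, -c bdev_io_cache_size
    ./scripts/rpc.py framework_start_init         # leave the wait-for-rpc state; subsystems init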
00:34:26.125 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:26.126 [2024-11-27 10:06:41.478217] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:26.126 Malloc0 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:26.126 [2024-11-27 10:06:41.550711] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=4124619 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=4124621 00:34:26.126 10:06:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:26.126 { 00:34:26.126 "params": { 00:34:26.126 "name": "Nvme$subsystem", 00:34:26.126 "trtype": "$TEST_TRANSPORT", 00:34:26.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:26.126 "adrfam": "ipv4", 00:34:26.126 "trsvcid": "$NVMF_PORT", 00:34:26.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:26.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:26.126 "hdgst": ${hdgst:-false}, 00:34:26.126 "ddgst": ${ddgst:-false} 00:34:26.126 }, 00:34:26.126 "method": "bdev_nvme_attach_controller" 00:34:26.126 } 00:34:26.126 EOF 00:34:26.126 )") 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=4124623 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:26.126 { 00:34:26.126 "params": { 00:34:26.126 "name": "Nvme$subsystem", 00:34:26.126 "trtype": "$TEST_TRANSPORT", 00:34:26.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:26.126 "adrfam": "ipv4", 00:34:26.126 "trsvcid": "$NVMF_PORT", 00:34:26.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:26.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:26.126 "hdgst": ${hdgst:-false}, 00:34:26.126 "ddgst": ${ddgst:-false} 00:34:26.126 }, 00:34:26.126 "method": "bdev_nvme_attach_controller" 00:34:26.126 } 00:34:26.126 EOF 00:34:26.126 )") 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=4124626 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 
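Each bdevperf instance above receives its configuration as --json /dev/fd/63, i.e. through bash process substitution: gen_nvmf_target_json assembles the heredoc fragment shown (one bdev_nvme_attach_controller entry per subsystem, with hdgst/ddgst defaulting to false) and the expanded JSON, printed a little further down, is read once from that descriptor. Stripped to its shape, one instance's launch looks like this (paths and parameters as they appear in this log; <(...) is the construct that expands to /dev/fd/63):

    ./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
        --json <(gen_nvmf_target_json) &   # config consumed from the pipe at startup
    WRITE_PID=$!                           # reaped by the wait calls at the end of the run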
00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:26.126 { 00:34:26.126 "params": { 00:34:26.126 "name": "Nvme$subsystem", 00:34:26.126 "trtype": "$TEST_TRANSPORT", 00:34:26.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:26.126 "adrfam": "ipv4", 00:34:26.126 "trsvcid": "$NVMF_PORT", 00:34:26.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:26.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:26.126 "hdgst": ${hdgst:-false}, 00:34:26.126 "ddgst": ${ddgst:-false} 00:34:26.126 }, 00:34:26.126 "method": "bdev_nvme_attach_controller" 00:34:26.126 } 00:34:26.126 EOF 00:34:26.126 )") 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:26.126 { 00:34:26.126 "params": { 00:34:26.126 "name": "Nvme$subsystem", 00:34:26.126 "trtype": "$TEST_TRANSPORT", 00:34:26.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:26.126 "adrfam": "ipv4", 00:34:26.126 "trsvcid": "$NVMF_PORT", 00:34:26.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:26.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:26.126 "hdgst": ${hdgst:-false}, 00:34:26.126 "ddgst": ${ddgst:-false} 00:34:26.126 }, 00:34:26.126 "method": "bdev_nvme_attach_controller" 00:34:26.126 } 00:34:26.126 EOF 00:34:26.126 )") 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 4124619 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:26.126 "params": { 00:34:26.126 "name": "Nvme1", 00:34:26.126 "trtype": "tcp", 00:34:26.126 "traddr": "10.0.0.2", 00:34:26.126 "adrfam": "ipv4", 00:34:26.126 "trsvcid": "4420", 00:34:26.126 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:26.126 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:26.126 "hdgst": false, 00:34:26.126 "ddgst": false 00:34:26.126 }, 00:34:26.126 "method": "bdev_nvme_attach_controller" 00:34:26.126 }' 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:26.126 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:26.126 "params": { 00:34:26.126 "name": "Nvme1", 00:34:26.126 "trtype": "tcp", 00:34:26.126 "traddr": "10.0.0.2", 00:34:26.126 "adrfam": "ipv4", 00:34:26.126 "trsvcid": "4420", 00:34:26.127 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:26.127 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:26.127 "hdgst": false, 00:34:26.127 "ddgst": false 00:34:26.127 }, 00:34:26.127 "method": "bdev_nvme_attach_controller" 00:34:26.127 }' 00:34:26.127 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:26.127 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:26.127 "params": { 00:34:26.127 "name": "Nvme1", 00:34:26.127 "trtype": "tcp", 00:34:26.127 "traddr": "10.0.0.2", 00:34:26.127 "adrfam": "ipv4", 00:34:26.127 "trsvcid": "4420", 00:34:26.127 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:26.127 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:26.127 "hdgst": false, 00:34:26.127 "ddgst": false 00:34:26.127 }, 00:34:26.127 "method": "bdev_nvme_attach_controller" 00:34:26.127 }' 00:34:26.127 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:26.127 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:26.127 "params": { 00:34:26.127 "name": "Nvme1", 00:34:26.127 "trtype": "tcp", 00:34:26.127 "traddr": "10.0.0.2", 00:34:26.127 "adrfam": "ipv4", 00:34:26.127 "trsvcid": "4420", 00:34:26.127 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:26.127 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:26.127 "hdgst": false, 00:34:26.127 "ddgst": false 00:34:26.127 }, 00:34:26.127 "method": "bdev_nvme_attach_controller" 00:34:26.127 }' 00:34:26.388 [2024-11-27 10:06:41.611565] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:34:26.388 [2024-11-27 10:06:41.611640] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:34:26.388 [2024-11-27 10:06:41.613439] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
00:34:26.388 [2024-11-27 10:06:41.613507] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:34:26.388 [2024-11-27 10:06:41.614133] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:34:26.388 [2024-11-27 10:06:41.614207] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:34:26.388 [2024-11-27 10:06:41.620020] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:34:26.388 [2024-11-27 10:06:41.620083] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:34:26.388 [2024-11-27 10:06:41.826365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:26.648 [2024-11-27 10:06:41.866453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:26.648 [2024-11-27 10:06:41.919498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:26.648 [2024-11-27 10:06:41.959256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:34:26.648 [2024-11-27 10:06:42.013363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:26.649 [2024-11-27 10:06:42.055461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:26.649 [2024-11-27 10:06:42.081489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:26.909 [2024-11-27 10:06:42.120217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:26.909 Running I/O for 1 seconds... 00:34:26.909 Running I/O for 1 seconds... 00:34:26.909 Running I/O for 1 seconds... 00:34:26.909 Running I/O for 1 seconds... 
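At this point four bdevperf processes are running concurrently against the same Nvme1n1 namespace, one workload per core, each for one second at queue depth 128 with 4 KiB I/O: 0x10/write, 0x20/read, 0x40/flush, 0x80/unmap. Condensed into a loop for illustration (the script itself launches them individually, recording WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID and waiting on each PID in turn, as the wait 4124619/4124621/4124623/4124626 calls around the results below show):

    pids=()
    i=0
    for spec in "0x10 write" "0x20 read" "0x40 flush" "0x80 unmap"; do
      set -- $spec            # unquoted on purpose: split into core mask and workload
      i=$((i + 1))
      ./build/examples/bdevperf -m "$1" -i "$i" -q 128 -o 4096 -w "$2" -t 1 -s 256 \
          --json <(gen_nvmf_target_json) &
      pids+=($!)
    done
    wait "${pids[@]}"         # block until all four print their Latency tables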
00:34:27.852 180504.00 IOPS, 705.09 MiB/s
00:34:27.852 Latency(us)
00:34:27.852 [2024-11-27T09:06:43.318Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:27.852 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:34:27.852 Nvme1n1 : 1.00 180146.66 703.70 0.00 0.00 706.43 307.20 1979.73
00:34:27.852 [2024-11-27T09:06:43.318Z] ===================================================================================================================
00:34:27.852 [2024-11-27T09:06:43.318Z] Total : 180146.66 703.70 0.00 0.00 706.43 307.20 1979.73
00:34:28.113 6892.00 IOPS, 26.92 MiB/s
00:34:28.113 Latency(us)
00:34:28.113 [2024-11-27T09:06:43.579Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:28.113 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:34:28.113 Nvme1n1 : 1.02 6892.32 26.92 0.00 0.00 18410.82 5652.48 25449.81
00:34:28.113 [2024-11-27T09:06:43.579Z] ===================================================================================================================
00:34:28.113 [2024-11-27T09:06:43.579Z] Total : 6892.32 26.92 0.00 0.00 18410.82 5652.48 25449.81
00:34:28.113 11687.00 IOPS, 45.65 MiB/s
[2024-11-27T09:06:43.579Z] 6820.00 IOPS, 26.64 MiB/s
00:34:28.113 Latency(us)
00:34:28.113 [2024-11-27T09:06:43.579Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:28.113 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:34:28.113 Nvme1n1 : 1.01 11741.02 45.86 0.00 0.00 10862.00 2184.53 17367.04
00:34:28.113 [2024-11-27T09:06:43.579Z] ===================================================================================================================
00:34:28.113 [2024-11-27T09:06:43.579Z] Total : 11741.02 45.86 0.00 0.00 10862.00 2184.53 17367.04
00:34:28.113
00:34:28.113 Latency(us)
00:34:28.113 [2024-11-27T09:06:43.579Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:28.113 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:34:28.113 Nvme1n1 : 1.01 6941.89 27.12 0.00 0.00 18394.43 3741.01 33860.27
00:34:28.113 [2024-11-27T09:06:43.579Z] ===================================================================================================================
00:34:28.113 [2024-11-27T09:06:43.579Z] Total : 6941.89 27.12 0.00 0.00 18394.43 3741.01 33860.27
00:34:28.113 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 4124621
00:34:28.113 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 4124623
00:34:28.113 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 4124626
00:34:28.113 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:28.113 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:28.113 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:34:28.114 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:28.114 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:34:28.114 10:06:43
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:34:28.114 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:28.114 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:34:28.114 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:28.114 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:34:28.114 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:28.114 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:28.114 rmmod nvme_tcp 00:34:28.114 rmmod nvme_fabrics 00:34:28.114 rmmod nvme_keyring 00:34:28.114 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:28.114 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:34:28.114 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:34:28.114 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 4124270 ']' 00:34:28.114 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 4124270 00:34:28.114 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 4124270 ']' 00:34:28.114 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 4124270 00:34:28.114 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:34:28.114 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:28.114 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4124270 00:34:28.375 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:28.375 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:28.375 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4124270' 00:34:28.375 killing process with pid 4124270 00:34:28.375 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 4124270 00:34:28.375 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 4124270 00:34:28.375 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:28.375 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:28.375 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:28.375 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:34:28.375 10:06:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:34:28.375 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:28.375 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:34:28.375 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:28.375 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:28.375 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:28.375 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:28.375 10:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:30.920 10:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:30.920 00:34:30.920 real 0m13.127s 00:34:30.920 user 0m16.382s 00:34:30.920 sys 0m7.547s 00:34:30.920 10:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:30.920 10:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:30.920 ************************************ 00:34:30.920 END TEST nvmf_bdev_io_wait 00:34:30.920 ************************************ 00:34:30.920 10:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:34:30.920 10:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:30.920 10:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:30.920 10:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:30.920 ************************************ 00:34:30.920 START TEST nvmf_queue_depth 00:34:30.921 ************************************ 00:34:30.921 10:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:34:30.921 * Looking for test storage... 
00:34:30.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:30.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.921 --rc genhtml_branch_coverage=1 00:34:30.921 --rc genhtml_function_coverage=1 00:34:30.921 --rc genhtml_legend=1 00:34:30.921 --rc geninfo_all_blocks=1 00:34:30.921 --rc geninfo_unexecuted_blocks=1 00:34:30.921 00:34:30.921 ' 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:30.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.921 --rc genhtml_branch_coverage=1 00:34:30.921 --rc genhtml_function_coverage=1 00:34:30.921 --rc genhtml_legend=1 00:34:30.921 --rc geninfo_all_blocks=1 00:34:30.921 --rc geninfo_unexecuted_blocks=1 00:34:30.921 00:34:30.921 ' 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:30.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.921 --rc genhtml_branch_coverage=1 00:34:30.921 --rc genhtml_function_coverage=1 00:34:30.921 --rc genhtml_legend=1 00:34:30.921 --rc geninfo_all_blocks=1 00:34:30.921 --rc geninfo_unexecuted_blocks=1 00:34:30.921 00:34:30.921 ' 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:30.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.921 --rc genhtml_branch_coverage=1 00:34:30.921 --rc genhtml_function_coverage=1 00:34:30.921 --rc genhtml_legend=1 00:34:30.921 --rc geninfo_all_blocks=1 00:34:30.921 --rc 
geninfo_unexecuted_blocks=1 00:34:30.921 00:34:30.921 ' 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:34:30.921 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.922 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:34:30.922 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:30.922 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:30.922 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:30.922 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:30.922 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:34:30.922 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:30.922 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:30.922 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:30.922 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:30.922 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:30.922 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:34:30.922 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:34:30.922 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:30.922 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:34:30.922 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:30.922 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:30.922 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:30.922 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:30.922 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:30.922 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:30.922 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:30.922 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:30.922 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:30.922 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:30.922 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:34:30.922 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:39.069 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:39.069 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:34:39.069 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:39.069 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:39.069 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:39.070 10:06:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:39.070 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:39.070 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:34:39.070 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:39.070 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:39.070 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:39.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:39.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.706 ms 00:34:39.070 00:34:39.070 --- 10.0.0.2 ping statistics --- 00:34:39.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:39.070 rtt min/avg/max/mdev = 0.706/0.706/0.706/0.000 ms 00:34:39.071 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:39.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:39.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:34:39.071 00:34:39.071 --- 10.0.0.1 ping statistics --- 00:34:39.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:39.071 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:34:39.071 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:39.071 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:34:39.071 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:39.071 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:39.071 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:39.071 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:39.071 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:39.071 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:39.071 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:39.071 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:34:39.071 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:39.071 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:39.071 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:39.071 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=4129124 00:34:39.071 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 4129124 00:34:39.071 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:39.071 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 4129124 ']' 00:34:39.071 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:39.071 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:39.071 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:39.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
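The waitforlisten step above blocks until the freshly launched nvmf_tgt answers on its RPC socket before any configuration RPC is sent. A minimal sketch of that polling pattern, assuming the SPDK tree's scripts/rpc.py and the /var/tmp/spdk.sock address shown in the trace (the retry budget of 100 matches max_retries above; the probe RPC and sleep interval are illustrative, not the harness's exact code):

  pid=$!    # nvmf_tgt launched in the background just above
  for ((i = 0; i < 100; i++)); do
      # rpc_get_methods is a cheap RPC; any reply means the app is listening
      if scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
          break
      fi
      # bail out early if the target died instead of coming up
      kill -0 "$pid" 2>/dev/null || { echo 'nvmf_tgt exited prematurely'; exit 1; }
      sleep 0.5
  done
  ((i < 100)) || { echo 'timed out waiting for /var/tmp/spdk.sock'; exit 1; }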
00:34:39.071 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:39.071 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:39.071 [2024-11-27 10:06:53.791589] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:39.071 [2024-11-27 10:06:53.792737] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:34:39.071 [2024-11-27 10:06:53.792791] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:39.071 [2024-11-27 10:06:53.896141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:39.071 [2024-11-27 10:06:53.948173] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:39.071 [2024-11-27 10:06:53.948224] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:39.071 [2024-11-27 10:06:53.948233] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:39.071 [2024-11-27 10:06:53.948240] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:39.071 [2024-11-27 10:06:53.948246] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:39.071 [2024-11-27 10:06:53.948974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:39.071 [2024-11-27 10:06:54.029219] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:39.071 [2024-11-27 10:06:54.029514] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
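With the target up in interrupt mode, the rpc_cmd traces that follow provision it over the RPC socket. Condensed into plain rpc.py invocations, the sequence is (commands and flags copied verbatim from the trace; only the $rpc path, taken from this workspace's layout, is spelled out here):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192    # transport opts exactly as NVMF_TRANSPORT_OPTS
  $rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM-backed bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose the bdev as a namespace
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420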
00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:39.333 [2024-11-27 10:06:54.661821] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:39.333 Malloc0 00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:39.333 [2024-11-27 10:06:54.746029] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=4129337 00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 4129337 /var/tmp/bdevperf.sock 00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 4129337 ']' 00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:39.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:39.333 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:39.595 [2024-11-27 10:06:54.802958] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
00:34:39.595 [2024-11-27 10:06:54.803025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4129337 ] 00:34:39.595 [2024-11-27 10:06:54.877764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:39.595 [2024-11-27 10:06:54.930687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:40.167 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:40.167 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:34:40.167 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:40.167 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.167 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:40.428 NVMe0n1 00:34:40.429 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.429 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:40.429 Running I/O for 10 seconds... 00:34:42.760 8220.00 IOPS, 32.11 MiB/s [2024-11-27T09:06:59.167Z] 8704.00 IOPS, 34.00 MiB/s [2024-11-27T09:07:00.106Z] 8879.33 IOPS, 34.68 MiB/s [2024-11-27T09:07:01.054Z] 9987.50 IOPS, 39.01 MiB/s [2024-11-27T09:07:01.996Z] 10662.40 IOPS, 41.65 MiB/s [2024-11-27T09:07:02.938Z] 11135.50 IOPS, 43.50 MiB/s [2024-11-27T09:07:03.880Z] 11507.71 IOPS, 44.95 MiB/s [2024-11-27T09:07:04.822Z] 11742.50 IOPS, 45.87 MiB/s [2024-11-27T09:07:06.206Z] 11946.00 IOPS, 46.66 MiB/s [2024-11-27T09:07:06.206Z] 12120.10 IOPS, 47.34 MiB/s 00:34:50.740 Latency(us) 00:34:50.740 [2024-11-27T09:07:06.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:50.740 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:34:50.740 Verification LBA range: start 0x0 length 0x4000 00:34:50.740 NVMe0n1 : 10.05 12157.44 47.49 0.00 0.00 83932.03 10267.31 77332.48 00:34:50.740 [2024-11-27T09:07:06.206Z] =================================================================================================================== 00:34:50.740 [2024-11-27T09:07:06.206Z] Total : 12157.44 47.49 0.00 0.00 83932.03 10267.31 77332.48 00:34:50.740 { 00:34:50.740 "results": [ 00:34:50.740 { 00:34:50.740 "job": "NVMe0n1", 00:34:50.740 "core_mask": "0x1", 00:34:50.740 "workload": "verify", 00:34:50.740 "status": "finished", 00:34:50.740 "verify_range": { 00:34:50.740 "start": 0, 00:34:50.740 "length": 16384 00:34:50.740 }, 00:34:50.740 "queue_depth": 1024, 00:34:50.740 "io_size": 4096, 00:34:50.740 "runtime": 10.045039, 00:34:50.740 "iops": 12157.444087573975, 00:34:50.740 "mibps": 47.49001596708584, 00:34:50.740 "io_failed": 0, 00:34:50.740 "io_timeout": 0, 00:34:50.740 "avg_latency_us": 83932.03011545831, 00:34:50.740 "min_latency_us": 10267.306666666667, 00:34:50.740 "max_latency_us": 77332.48 00:34:50.740 } 00:34:50.740 ], 
00:34:50.740 "core_count": 1 00:34:50.740 } 00:34:50.740 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 4129337 00:34:50.740 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 4129337 ']' 00:34:50.740 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 4129337 00:34:50.740 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:34:50.740 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:50.740 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4129337 00:34:50.740 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:50.740 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:50.740 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4129337' 00:34:50.740 killing process with pid 4129337 00:34:50.740 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 4129337 00:34:50.740 Received shutdown signal, test time was about 10.000000 seconds 00:34:50.740 00:34:50.740 Latency(us) 00:34:50.740 [2024-11-27T09:07:06.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:50.740 [2024-11-27T09:07:06.206Z] =================================================================================================================== 00:34:50.740 [2024-11-27T09:07:06.206Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:50.740 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 4129337 00:34:50.740 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:34:50.740 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:34:50.740 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:50.740 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:34:50.740 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:50.740 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:34:50.740 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:50.740 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:50.740 rmmod nvme_tcp 00:34:50.740 rmmod nvme_fabrics 00:34:50.740 rmmod nvme_keyring 00:34:50.740 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:50.740 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:34:50.740 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:34:50.740 10:07:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 4129124 ']' 00:34:50.740 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 4129124 00:34:50.740 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 4129124 ']' 00:34:50.740 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 4129124 00:34:50.740 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:34:50.740 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:50.741 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4129124 00:34:50.741 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:50.741 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:50.741 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4129124' 00:34:50.741 killing process with pid 4129124 00:34:50.741 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 4129124 00:34:50.741 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 4129124 00:34:51.002 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:51.002 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:51.002 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:51.002 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:34:51.002 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:34:51.002 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:51.002 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:34:51.002 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:51.002 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:51.002 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:51.002 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:51.002 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:53.549 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:53.549 00:34:53.549 real 0m22.444s 00:34:53.549 user 0m24.492s 00:34:53.549 sys 0m7.515s 00:34:53.549 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:34:53.549 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:53.549 ************************************ 00:34:53.549 END TEST nvmf_queue_depth 00:34:53.549 ************************************ 00:34:53.549 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:34:53.549 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:53.549 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:53.549 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:53.550 ************************************ 00:34:53.550 START TEST nvmf_target_multipath 00:34:53.550 ************************************ 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:34:53.550 * Looking for test storage... 00:34:53.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:34:53.550 10:07:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:53.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.550 --rc genhtml_branch_coverage=1 00:34:53.550 --rc genhtml_function_coverage=1 00:34:53.550 --rc genhtml_legend=1 00:34:53.550 --rc geninfo_all_blocks=1 00:34:53.550 --rc geninfo_unexecuted_blocks=1 00:34:53.550 00:34:53.550 ' 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:53.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.550 --rc genhtml_branch_coverage=1 00:34:53.550 --rc genhtml_function_coverage=1 00:34:53.550 --rc genhtml_legend=1 00:34:53.550 --rc geninfo_all_blocks=1 00:34:53.550 --rc geninfo_unexecuted_blocks=1 00:34:53.550 00:34:53.550 ' 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:53.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.550 --rc genhtml_branch_coverage=1 00:34:53.550 --rc genhtml_function_coverage=1 00:34:53.550 --rc genhtml_legend=1 00:34:53.550 --rc geninfo_all_blocks=1 00:34:53.550 --rc 
geninfo_unexecuted_blocks=1 00:34:53.550 00:34:53.550 ' 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:53.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.550 --rc genhtml_branch_coverage=1 00:34:53.550 --rc genhtml_function_coverage=1 00:34:53.550 --rc genhtml_legend=1 00:34:53.550 --rc geninfo_all_blocks=1 00:34:53.550 --rc geninfo_unexecuted_blocks=1 00:34:53.550 00:34:53.550 ' 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:34:53.550 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
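A few entries back, the trace walked scripts/common.sh's dotted-version compare (lt 1.15 2 splits both strings on IFS=.-: and compares numerically field by field to decide which lcov options apply). A standalone sketch of the same idea, not the script's exact code:

  # return 0 (true) when dotted version $1 sorts before $2
  version_lt() {
      local IFS=.-:
      local -a a=($1) b=($2)    # unquoted expansion splits on IFS
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          ((${a[i]:-0} < ${b[i]:-0})) && return 0
          ((${a[i]:-0} > ${b[i]:-0})) && return 1
      done
      return 1    # equal versions are not 'less than'
  }
  version_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'    # matches the trace's decision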
00:34:53.551 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:53.551 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:53.551 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.551 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.551 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.551 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:34:53.551 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.551 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:34:53.551 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:53.551 10:07:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:53.551 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:53.551 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:53.551 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:53.551 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:53.551 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:53.551 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:53.551 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:53.551 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:53.551 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:53.551 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:53.551 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:53.551 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:53.551 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:34:53.551 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:53.551 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:53.551 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:53.551 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:53.551 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:53.551 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:53.551 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:53.551 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:53.551 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:53.551 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:53.551 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:34:53.551 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
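nvmftestinit now repeats the NIC discovery for the multipath run: each whitelisted e810 PCI function is mapped to its kernel net device through the sysfs glob seen in the trace below. The core of that lookup, using the two addresses this run discovered (nvmf/common.sh additionally checks operstate and the transport type, omitted here):

  for pci in 0000:4b:00.0 0000:4b:00.1; do
      # a bound kernel driver exposes the netdev name under the PCI node
      for path in /sys/bus/pci/devices/$pci/net/*; do
          [ -e "$path" ] || continue    # glob left literal: no netdev (e.g. bound to vfio-pci)
          echo "Found net devices under $pci: ${path##*/}"
      done
  done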
00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:01.694 10:07:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:01.694 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:01.694 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:01.694 10:07:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:01.694 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:01.694 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:01.694 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:01.695 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:35:01.695 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:01.695 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:01.695 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:01.695 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:01.695 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:01.695 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:01.695 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:01.695 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:01.695 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:01.695 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:01.695 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:01.695 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:01.695 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:01.695 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:01.695 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:01.695 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:01.695 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:01.695 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:01.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:01.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms 00:35:01.695 00:35:01.695 --- 10.0.0.2 ping statistics --- 00:35:01.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:01.695 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:01.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:01.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:35:01.695 00:35:01.695 --- 10.0.0.1 ping statistics --- 00:35:01.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:01.695 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:35:01.695 only one NIC for nvmf test 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:01.695 rmmod nvme_tcp 00:35:01.695 rmmod nvme_fabrics 00:35:01.695 rmmod nvme_keyring 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:35:01.695 10:07:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:01.695 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:03.078 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:03.078 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:35:03.078 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:35:03.078 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:03.078 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:35:03.078 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:03.078 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:35:03.078 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:03.078 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:03.078 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:03.078 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:35:03.078 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:35:03.079 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:35:03.079 10:07:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:03.079 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:03.079 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:03.079 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:35:03.079 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:35:03.079 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:03.079 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:35:03.079 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:03.079 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:03.079 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:03.079 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:03.079 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:03.079 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:03.079 00:35:03.079 real 0m9.959s 00:35:03.079 user 0m2.217s 00:35:03.079 sys 0m5.700s 00:35:03.079 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:03.079 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:35:03.079 ************************************ 00:35:03.079 END TEST nvmf_target_multipath 00:35:03.079 ************************************ 00:35:03.079 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:35:03.079 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:03.079 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:03.079 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:03.079 ************************************ 00:35:03.079 START TEST nvmf_zcopy 00:35:03.079 ************************************ 00:35:03.079 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:35:03.339 * Looking for test storage... 
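The nvmf_tcp_init sequence logged above (and repeated when the zcopy test initializes below) reduces to a few iproute2/iptables steps: isolate the target-side port in its own network namespace, address both ends, open the NVMe/TCP port, and ping in both directions. A minimal sketch of that plumbing, assuming the ice netdevs cvl_0_0/cvl_0_1 seen in this log:

    # Target runs in its own namespace; the initiator stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP listener port; the harness tags the real rule with an
    # 'SPDK_NVMF' comment so cleanup can strip only its own rules.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator

That comment tag is what the iptr cleanup above relies on: iptables-save | grep -v SPDK_NVMF | iptables-restore drops exactly the rules the test added and nothing else.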
00:35:03.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:03.339 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:03.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.340 --rc genhtml_branch_coverage=1 00:35:03.340 --rc genhtml_function_coverage=1 00:35:03.340 --rc genhtml_legend=1 00:35:03.340 --rc geninfo_all_blocks=1 00:35:03.340 --rc geninfo_unexecuted_blocks=1 00:35:03.340 00:35:03.340 ' 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:03.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.340 --rc genhtml_branch_coverage=1 00:35:03.340 --rc genhtml_function_coverage=1 00:35:03.340 --rc genhtml_legend=1 00:35:03.340 --rc geninfo_all_blocks=1 00:35:03.340 --rc geninfo_unexecuted_blocks=1 00:35:03.340 00:35:03.340 ' 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:03.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.340 --rc genhtml_branch_coverage=1 00:35:03.340 --rc genhtml_function_coverage=1 00:35:03.340 --rc genhtml_legend=1 00:35:03.340 --rc geninfo_all_blocks=1 00:35:03.340 --rc geninfo_unexecuted_blocks=1 00:35:03.340 00:35:03.340 ' 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:03.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.340 --rc genhtml_branch_coverage=1 00:35:03.340 --rc genhtml_function_coverage=1 00:35:03.340 --rc genhtml_legend=1 00:35:03.340 --rc geninfo_all_blocks=1 00:35:03.340 --rc geninfo_unexecuted_blocks=1 00:35:03.340 00:35:03.340 ' 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:03.340 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:03.340 10:07:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:03.341 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:03.341 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:03.341 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:03.341 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:03.341 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:35:03.341 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:03.341 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:03.341 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:03.341 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:03.341 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:03.341 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:03.341 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:03.341 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:03.341 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:03.341 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:03.341 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:35:03.341 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:11.479 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:11.479 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:35:11.479 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:11.479 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:11.479 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:11.479 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:11.479 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:11.479 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:35:11.479 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:11.479 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:35:11.479 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:35:11.479 10:07:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:35:11.479 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:35:11.479 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:35:11.479 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:35:11.479 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:11.479 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:11.479 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:11.479 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:11.479 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:11.479 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:11.479 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:11.479 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:11.479 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:11.479 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:11.479 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:11.480 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:11.480 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:11.480 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:11.480 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:11.480 10:07:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:11.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:11.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms 00:35:11.480 00:35:11.480 --- 10.0.0.2 ping statistics --- 00:35:11.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:11.480 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:11.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:11.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:35:11.480 00:35:11.480 --- 10.0.0.1 ping statistics --- 00:35:11.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:11.480 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:11.480 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:11.480 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=4139677 00:35:11.480 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 4139677 00:35:11.480 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:35:11.480 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 4139677 ']' 00:35:11.481 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:11.481 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:11.481 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:11.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:11.481 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:11.481 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:11.481 [2024-11-27 10:07:26.059881] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:11.481 [2024-11-27 10:07:26.061011] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:35:11.481 [2024-11-27 10:07:26.061066] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:11.481 [2024-11-27 10:07:26.159826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:11.481 [2024-11-27 10:07:26.196794] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:11.481 [2024-11-27 10:07:26.196826] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:11.481 [2024-11-27 10:07:26.196834] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:11.481 [2024-11-27 10:07:26.196841] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:11.481 [2024-11-27 10:07:26.196847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:11.481 [2024-11-27 10:07:26.197413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:11.481 [2024-11-27 10:07:26.251776] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:11.481 [2024-11-27 10:07:26.252024] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
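The target above is launched inside the namespace and the harness blocks until its RPC socket answers (waitforlisten). A hedged sketch of doing the same by hand; the polling loop is illustrative rather than the harness's implementation, and rpc_get_methods is a stock SPDK RPC used here only as a liveness probe:

    # Start nvmf_tgt in the target namespace with the flags from this log.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    # Poll the default RPC socket until the app is ready to accept commands.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done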
00:35:11.481 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:11.481 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:35:11.481 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:11.481 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:11.481 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:11.481 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:11.481 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:35:11.481 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:35:11.481 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.481 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:11.481 [2024-11-27 10:07:26.902184] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:11.481 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.481 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:35:11.481 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.481 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:11.481 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.481 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:11.481 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.481 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:11.481 [2024-11-27 10:07:26.930381] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:11.481 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.481 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:11.481 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.481 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:11.742 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.742 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:35:11.742 10:07:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.742 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:11.742 malloc0 00:35:11.742 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.742 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:35:11.742 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.742 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:11.742 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.742 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:35:11.742 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:35:11.742 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:35:11.742 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:35:11.742 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:11.742 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:11.742 { 00:35:11.742 "params": { 00:35:11.742 "name": "Nvme$subsystem", 00:35:11.742 "trtype": "$TEST_TRANSPORT", 00:35:11.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:11.742 "adrfam": "ipv4", 00:35:11.742 "trsvcid": "$NVMF_PORT", 00:35:11.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:11.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:11.742 "hdgst": ${hdgst:-false}, 00:35:11.742 "ddgst": ${ddgst:-false} 00:35:11.742 }, 00:35:11.742 "method": "bdev_nvme_attach_controller" 00:35:11.742 } 00:35:11.742 EOF 00:35:11.742 )") 00:35:11.742 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:35:11.742 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:35:11.742 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:35:11.742 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:11.742 "params": { 00:35:11.742 "name": "Nvme1", 00:35:11.742 "trtype": "tcp", 00:35:11.742 "traddr": "10.0.0.2", 00:35:11.742 "adrfam": "ipv4", 00:35:11.742 "trsvcid": "4420", 00:35:11.742 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:11.742 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:11.742 "hdgst": false, 00:35:11.742 "ddgst": false 00:35:11.742 }, 00:35:11.742 "method": "bdev_nvme_attach_controller" 00:35:11.742 }' 00:35:11.742 [2024-11-27 10:07:27.031592] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
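Each rpc_cmd above maps one-to-one onto scripts/rpc.py against the target's UNIX socket. The zcopy target provisioning just performed, as a sketch (all flags copied from the log; -s selects the socket path logged above):

    rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy        # TCP transport with zero-copy enabled
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0               # 32 MiB ramdisk, 4096-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1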
00:35:11.742 [2024-11-27 10:07:27.031642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4140013 ] 00:35:11.742 [2024-11-27 10:07:27.090959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:11.742 [2024-11-27 10:07:27.120919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:12.003 Running I/O for 10 seconds... 00:35:13.961 6631.00 IOPS, 51.80 MiB/s [2024-11-27T09:07:30.812Z] 6660.00 IOPS, 52.03 MiB/s [2024-11-27T09:07:31.753Z] 6661.00 IOPS, 52.04 MiB/s [2024-11-27T09:07:32.697Z] 6667.25 IOPS, 52.09 MiB/s [2024-11-27T09:07:33.636Z] 6674.40 IOPS, 52.14 MiB/s [2024-11-27T09:07:34.652Z] 6759.50 IOPS, 52.81 MiB/s [2024-11-27T09:07:35.639Z] 7177.71 IOPS, 56.08 MiB/s [2024-11-27T09:07:36.582Z] 7488.50 IOPS, 58.50 MiB/s [2024-11-27T09:07:37.526Z] 7734.33 IOPS, 60.42 MiB/s [2024-11-27T09:07:37.526Z] 7927.90 IOPS, 61.94 MiB/s 00:35:22.060 Latency(us) 00:35:22.060 [2024-11-27T09:07:37.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:22.060 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:35:22.060 Verification LBA range: start 0x0 length 0x1000 00:35:22.060 Nvme1n1 : 10.01 7932.85 61.98 0.00 0.00 16096.38 737.28 24466.77 00:35:22.060 [2024-11-27T09:07:37.526Z] =================================================================================================================== 00:35:22.060 [2024-11-27T09:07:37.526Z] Total : 7932.85 61.98 0.00 0.00 16096.38 737.28 24466.77 00:35:22.320 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=4142027 00:35:22.320 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:35:22.320 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:22.320 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:35:22.320 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:35:22.320 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:35:22.320 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:35:22.320 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:22.320 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:22.320 { 00:35:22.320 "params": { 00:35:22.320 "name": "Nvme$subsystem", 00:35:22.320 "trtype": "$TEST_TRANSPORT", 00:35:22.320 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:22.320 "adrfam": "ipv4", 00:35:22.320 "trsvcid": "$NVMF_PORT", 00:35:22.320 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:22.320 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:22.320 "hdgst": ${hdgst:-false}, 00:35:22.320 "ddgst": ${ddgst:-false} 00:35:22.320 }, 00:35:22.320 "method": "bdev_nvme_attach_controller" 00:35:22.320 } 00:35:22.320 EOF 00:35:22.320 )") 00:35:22.320 [2024-11-27 10:07:37.557740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already 
in use 00:35:22.320 [2024-11-27 10:07:37.557769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:22.320 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:35:22.320 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:35:22.320 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:35:22.320 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:22.320 "params": { 00:35:22.320 "name": "Nvme1", 00:35:22.320 "trtype": "tcp", 00:35:22.320 "traddr": "10.0.0.2", 00:35:22.320 "adrfam": "ipv4", 00:35:22.320 "trsvcid": "4420", 00:35:22.320 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:22.320 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:22.320 "hdgst": false, 00:35:22.320 "ddgst": false 00:35:22.320 }, 00:35:22.320 "method": "bdev_nvme_attach_controller" 00:35:22.320 }' 00:35:22.320 [2024-11-27 10:07:37.569709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:22.320 [2024-11-27 10:07:37.569718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:22.320 [2024-11-27 10:07:37.581706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:22.320 [2024-11-27 10:07:37.581714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:22.320 [2024-11-27 10:07:37.593706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:22.320 [2024-11-27 10:07:37.593713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:22.320 [2024-11-27 10:07:37.599577] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
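The recurring pair "Requested NSID 1 already in use" / "Unable to add namespace" through the rest of this run is the target correctly rejecting duplicate namespace adds: the harness keeps re-issuing nvmf_subsystem_add_ns for NSID 1 while bdevperf drives I/O, and since malloc0 is still attached every attempt must fail. An illustrative negative check of the same kind (not the zcopy.sh loop itself):

    # Expected to fail: NSID 1 was attached to cnode1 earlier in the run.
    if ./scripts/rpc.py -s /var/tmp/spdk.sock \
        nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1; then
        echo "BUG: duplicate NSID was accepted" >&2
        exit 1
    fi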
00:35:22.320 [2024-11-27 10:07:37.599631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4142027 ] 00:35:22.320 [2024-11-27 10:07:37.605706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:22.320 [2024-11-27 10:07:37.605714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:22.320 [2024-11-27 10:07:37.617705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:22.320 [2024-11-27 10:07:37.617714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:22.320 [2024-11-27 10:07:37.629706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:22.320 [2024-11-27 10:07:37.629713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:22.320 [2024-11-27 10:07:37.641706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:22.320 [2024-11-27 10:07:37.641712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:22.321 [2024-11-27 10:07:37.653705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:22.321 [2024-11-27 10:07:37.653713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:22.321 [2024-11-27 10:07:37.665706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:22.321 [2024-11-27 10:07:37.665713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:22.321 [2024-11-27 10:07:37.677706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:22.321 [2024-11-27 10:07:37.677713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:22.321 [2024-11-27 10:07:37.683356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:22.321 [2024-11-27 10:07:37.689707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:22.321 [2024-11-27 10:07:37.689721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:22.321 [2024-11-27 10:07:37.701706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:22.321 [2024-11-27 10:07:37.701716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:22.321 [2024-11-27 10:07:37.712275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:22.321 [2024-11-27 10:07:37.713716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:22.321 [2024-11-27 10:07:37.713725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:22.321 [2024-11-27 10:07:37.725711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:22.321 [2024-11-27 10:07:37.725723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:22.321 [2024-11-27 10:07:37.737710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:22.321 [2024-11-27 10:07:37.737722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:22.321 [2024-11-27 10:07:37.749707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
[... the ERROR pair repeats at ~12 ms intervals from 10:07:37.749 through 10:07:37.917; repetitions omitted ...]
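The two messages always travel as a pair: the zcopy test keeps re-issuing an add-namespace RPC for NSID 1 while that namespace is still attached, so subsystem.c:2123 rejects the duplicate and nvmf_rpc.c:1517 (the callback run once the subsystem has been paused for the change) reports the failure back. A sketch of the kind of call that trips it, with the backing bdev name assumed for illustration:

  # Fails with "Requested NSID 1 already in use" while NSID 1 exists
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1

The run visibly continues past these errors into the timed I/O phase below, so they read as expected churn from the test's add/pause loop rather than its pass/fail signal.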
[... the ERROR pair repeats at ~12 ms intervals from 10:07:37.929 through 10:07:38.009; repetitions omitted ...]
00:35:22.581 Running I/O for 5 seconds...
[... the ERROR pair repeats at ~13 ms intervals from 10:07:38.024 through 10:07:38.145; repetitions omitted ...]
[... the ERROR pair repeats at ~13 ms intervals from 10:07:38.158 through 10:07:38.920; repetitions omitted ...]
[... the ERROR pair repeats at ~14 ms intervals from 10:07:38.934 through 10:07:39.016; repetitions omitted ...]
00:35:23.626 19335.00 IOPS, 151.05 MiB/s [2024-11-27T09:07:39.092Z]
[... the ERROR pair repeats at ~13 ms intervals from 10:07:39.030 through 10:07:39.111; repetitions omitted ...]
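A quick cross-check of the in-flight stats line: 19335 IOPS at 151.05 MiB/s implies 19335 x 8192 B = 158,392,320 B/s = 151.05 MiB/s, i.e. an 8 KiB I/O size, and the later 19194/149.95 and 19147/149.59 samples satisfy the same identity. Verified in shell:

  # bytes/s at 19335 IOPS with an 8 KiB block, then back to MiB/s
  echo $((19335 * 8192))                   # 158392320
  echo "scale=2; 158392320/1048576" | bc   # 151.05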
[... the ERROR pair repeats at ~13 ms intervals from 10:07:39.124 through 10:07:39.901; repetitions omitted ...]
[... the ERROR pair repeats at ~13 ms intervals from 10:07:39.914 through 10:07:40.010; repetitions omitted ...]
00:35:24.697 19194.00 IOPS, 149.95 MiB/s [2024-11-27T09:07:40.163Z]
[... the ERROR pair repeats at ~13 ms intervals from 10:07:40.024 through 10:07:40.105; repetitions omitted ...]
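The periodic stats lines land roughly one second apart (the bracketed 09:07:39/40/41Z timestamps), so the throughput trend is easy to lift out of the surrounding error churn; a one-liner sketch, with the capture file name assumed for illustration:

  # Extract just the periodic bdevperf throughput samples from the capture
  grep -Eo '[0-9]+\.[0-9]+ IOPS, [0-9]+\.[0-9]+ MiB/s' nvmf-tcp-phy-autotest.log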
[... the ERROR pair repeats at ~13 ms intervals from 10:07:40.117 through 10:07:40.897; repetitions omitted ...]
[... the ERROR pair repeats at ~13 ms intervals from 10:07:40.910 through 10:07:41.008; repetitions omitted ...]
00:35:25.740 19147.00 IOPS, 149.59 MiB/s [2024-11-27T09:07:41.206Z]
[... the ERROR pair repeats at ~13 ms intervals from 10:07:41.022 through 10:07:41.089; repetitions omitted ...]
10:07:41.102176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.740 [2024-11-27 10:07:41.102190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.740 [2024-11-27 10:07:41.117236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.740 [2024-11-27 10:07:41.117250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.740 [2024-11-27 10:07:41.130132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.740 [2024-11-27 10:07:41.130145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.740 [2024-11-27 10:07:41.144860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.740 [2024-11-27 10:07:41.144875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.740 [2024-11-27 10:07:41.157881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.740 [2024-11-27 10:07:41.157895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.740 [2024-11-27 10:07:41.170484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.740 [2024-11-27 10:07:41.170498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.740 [2024-11-27 10:07:41.184612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.740 [2024-11-27 10:07:41.184626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.740 [2024-11-27 10:07:41.197624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.740 [2024-11-27 10:07:41.197638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.000 [2024-11-27 10:07:41.210214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.000 [2024-11-27 10:07:41.210229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.000 [2024-11-27 10:07:41.224568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.000 [2024-11-27 10:07:41.224582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.000 [2024-11-27 10:07:41.238258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.000 [2024-11-27 10:07:41.238272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.000 [2024-11-27 10:07:41.253173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.000 [2024-11-27 10:07:41.253191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.000 [2024-11-27 10:07:41.266315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.000 [2024-11-27 10:07:41.266329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.000 [2024-11-27 10:07:41.280914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.000 [2024-11-27 10:07:41.280928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.000 [2024-11-27 10:07:41.294010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.000 [2024-11-27 10:07:41.294023] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.000 [2024-11-27 10:07:41.308755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.000 [2024-11-27 10:07:41.308769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.000 [2024-11-27 10:07:41.321604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.000 [2024-11-27 10:07:41.321618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.000 [2024-11-27 10:07:41.334500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.000 [2024-11-27 10:07:41.334514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.000 [2024-11-27 10:07:41.349266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.000 [2024-11-27 10:07:41.349281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.000 [2024-11-27 10:07:41.362742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.000 [2024-11-27 10:07:41.362756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.000 [2024-11-27 10:07:41.376878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.000 [2024-11-27 10:07:41.376893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.000 [2024-11-27 10:07:41.389971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.000 [2024-11-27 10:07:41.389985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.000 [2024-11-27 10:07:41.402838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.000 [2024-11-27 10:07:41.402852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.000 [2024-11-27 10:07:41.416694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.000 [2024-11-27 10:07:41.416708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.000 [2024-11-27 10:07:41.430060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.000 [2024-11-27 10:07:41.430073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.000 [2024-11-27 10:07:41.444988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.000 [2024-11-27 10:07:41.445002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.000 [2024-11-27 10:07:41.458379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.000 [2024-11-27 10:07:41.458393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.261 [2024-11-27 10:07:41.473226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.261 [2024-11-27 10:07:41.473241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.261 [2024-11-27 10:07:41.486520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.261 [2024-11-27 10:07:41.486534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.261 [2024-11-27 10:07:41.500955] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.261 [2024-11-27 10:07:41.500969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.261 [2024-11-27 10:07:41.514086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.261 [2024-11-27 10:07:41.514104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.261 [2024-11-27 10:07:41.528819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.261 [2024-11-27 10:07:41.528833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.261 [2024-11-27 10:07:41.541675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.261 [2024-11-27 10:07:41.541690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.261 [2024-11-27 10:07:41.554837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.261 [2024-11-27 10:07:41.554851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.261 [2024-11-27 10:07:41.568704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.261 [2024-11-27 10:07:41.568719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.261 [2024-11-27 10:07:41.581410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.261 [2024-11-27 10:07:41.581424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.261 [2024-11-27 10:07:41.594809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.261 [2024-11-27 10:07:41.594823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.261 [2024-11-27 10:07:41.609037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.261 [2024-11-27 10:07:41.609051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.261 [2024-11-27 10:07:41.621769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.261 [2024-11-27 10:07:41.621784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.261 [2024-11-27 10:07:41.634878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.261 [2024-11-27 10:07:41.634892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.261 [2024-11-27 10:07:41.649787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.261 [2024-11-27 10:07:41.649802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.261 [2024-11-27 10:07:41.662599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.261 [2024-11-27 10:07:41.662614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.261 [2024-11-27 10:07:41.676649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.261 [2024-11-27 10:07:41.676663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.261 [2024-11-27 10:07:41.689494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.261 [2024-11-27 10:07:41.689509] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.261 [2024-11-27 10:07:41.702635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.261 [2024-11-27 10:07:41.702649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.261 [2024-11-27 10:07:41.716996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.261 [2024-11-27 10:07:41.717010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.577 [2024-11-27 10:07:41.730418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.577 [2024-11-27 10:07:41.730432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.577 [2024-11-27 10:07:41.745454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.577 [2024-11-27 10:07:41.745468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.577 [2024-11-27 10:07:41.758820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.577 [2024-11-27 10:07:41.758835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.577 [2024-11-27 10:07:41.773214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.577 [2024-11-27 10:07:41.773232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.577 [2024-11-27 10:07:41.786528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.577 [2024-11-27 10:07:41.786543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.577 [2024-11-27 10:07:41.800632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.577 [2024-11-27 10:07:41.800646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.577 [2024-11-27 10:07:41.813741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.577 [2024-11-27 10:07:41.813755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.577 [2024-11-27 10:07:41.826546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.577 [2024-11-27 10:07:41.826560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.577 [2024-11-27 10:07:41.841108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.577 [2024-11-27 10:07:41.841124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.577 [2024-11-27 10:07:41.854235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.577 [2024-11-27 10:07:41.854249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.577 [2024-11-27 10:07:41.869241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.577 [2024-11-27 10:07:41.869255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.577 [2024-11-27 10:07:41.882399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.577 [2024-11-27 10:07:41.882413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.577 [2024-11-27 10:07:41.896674] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.577 [2024-11-27 10:07:41.896688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.577 [2024-11-27 10:07:41.909646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.577 [2024-11-27 10:07:41.909661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.577 [2024-11-27 10:07:41.923029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.577 [2024-11-27 10:07:41.923044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.577 [2024-11-27 10:07:41.937003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.577 [2024-11-27 10:07:41.937017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.577 [2024-11-27 10:07:41.950249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.577 [2024-11-27 10:07:41.950264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.577 [2024-11-27 10:07:41.965099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.577 [2024-11-27 10:07:41.965113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.577 [2024-11-27 10:07:41.978148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.577 [2024-11-27 10:07:41.978166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.577 [2024-11-27 10:07:41.992625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.577 [2024-11-27 10:07:41.992640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.577 [2024-11-27 10:07:42.005814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.577 [2024-11-27 10:07:42.005829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.577 [2024-11-27 10:07:42.018860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.577 [2024-11-27 10:07:42.018874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.577 19108.50 IOPS, 149.29 MiB/s [2024-11-27T09:07:42.043Z] [2024-11-27 10:07:42.033036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.577 [2024-11-27 10:07:42.033051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.837 [2024-11-27 10:07:42.045912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.837 [2024-11-27 10:07:42.045928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.837 [2024-11-27 10:07:42.058826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.837 [2024-11-27 10:07:42.058841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.837 [2024-11-27 10:07:42.072579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.837 [2024-11-27 10:07:42.072593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.837 [2024-11-27 10:07:42.085466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:35:26.837 [2024-11-27 10:07:42.085481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.837 [2024-11-27 10:07:42.098121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.837 [2024-11-27 10:07:42.098135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.837 [2024-11-27 10:07:42.112870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.837 [2024-11-27 10:07:42.112884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.837 [2024-11-27 10:07:42.125926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.837 [2024-11-27 10:07:42.125941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.837 [2024-11-27 10:07:42.138617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.837 [2024-11-27 10:07:42.138632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.837 [2024-11-27 10:07:42.153259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.837 [2024-11-27 10:07:42.153274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.837 [2024-11-27 10:07:42.166789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.837 [2024-11-27 10:07:42.166804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.837 [2024-11-27 10:07:42.180988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.837 [2024-11-27 10:07:42.181002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.837 [2024-11-27 10:07:42.193865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.837 [2024-11-27 10:07:42.193880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.837 [2024-11-27 10:07:42.206843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.837 [2024-11-27 10:07:42.206858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.837 [2024-11-27 10:07:42.221286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.837 [2024-11-27 10:07:42.221301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.837 [2024-11-27 10:07:42.234403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.837 [2024-11-27 10:07:42.234417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.837 [2024-11-27 10:07:42.248997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.837 [2024-11-27 10:07:42.249012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.837 [2024-11-27 10:07:42.262195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.837 [2024-11-27 10:07:42.262209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.837 [2024-11-27 10:07:42.277368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.837 [2024-11-27 10:07:42.277382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.837 [2024-11-27 10:07:42.290447] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.837 [2024-11-27 10:07:42.290462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.097 [2024-11-27 10:07:42.304818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.097 [2024-11-27 10:07:42.304833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.097 [2024-11-27 10:07:42.318056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.097 [2024-11-27 10:07:42.318070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.097 [2024-11-27 10:07:42.332734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.097 [2024-11-27 10:07:42.332749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.097 [2024-11-27 10:07:42.345847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.097 [2024-11-27 10:07:42.345861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.097 [2024-11-27 10:07:42.358819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.097 [2024-11-27 10:07:42.358834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.097 [2024-11-27 10:07:42.372710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.097 [2024-11-27 10:07:42.372725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.097 [2024-11-27 10:07:42.385318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.097 [2024-11-27 10:07:42.385333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.097 [2024-11-27 10:07:42.398851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.097 [2024-11-27 10:07:42.398865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.097 [2024-11-27 10:07:42.412691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.097 [2024-11-27 10:07:42.412707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.097 [2024-11-27 10:07:42.425997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.097 [2024-11-27 10:07:42.426012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.097 [2024-11-27 10:07:42.440869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.097 [2024-11-27 10:07:42.440884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.097 [2024-11-27 10:07:42.453592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.097 [2024-11-27 10:07:42.453606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.097 [2024-11-27 10:07:42.466776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.097 [2024-11-27 10:07:42.466791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.097 [2024-11-27 10:07:42.480918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.097 [2024-11-27 10:07:42.480933] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.097 [2024-11-27 10:07:42.493989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.097 [2024-11-27 10:07:42.494003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.097 [2024-11-27 10:07:42.508752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.097 [2024-11-27 10:07:42.508767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.097 [2024-11-27 10:07:42.521898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.097 [2024-11-27 10:07:42.521913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.097 [2024-11-27 10:07:42.534620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.097 [2024-11-27 10:07:42.534642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.097 [2024-11-27 10:07:42.548820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.097 [2024-11-27 10:07:42.548835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.097 [2024-11-27 10:07:42.561793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.097 [2024-11-27 10:07:42.561808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.358 [2024-11-27 10:07:42.574589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.358 [2024-11-27 10:07:42.574603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.358 [2024-11-27 10:07:42.588970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.358 [2024-11-27 10:07:42.588985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.358 [2024-11-27 10:07:42.601965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.358 [2024-11-27 10:07:42.601980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.358 [2024-11-27 10:07:42.614930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.358 [2024-11-27 10:07:42.614944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.358 [2024-11-27 10:07:42.628971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.358 [2024-11-27 10:07:42.628986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.358 [2024-11-27 10:07:42.642340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.358 [2024-11-27 10:07:42.642355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.358 [2024-11-27 10:07:42.656886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.358 [2024-11-27 10:07:42.656901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.358 [2024-11-27 10:07:42.669959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.358 [2024-11-27 10:07:42.669974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.358 [2024-11-27 10:07:42.682676] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.358 [2024-11-27 10:07:42.682691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.358 [2024-11-27 10:07:42.696761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.358 [2024-11-27 10:07:42.696775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.358 [2024-11-27 10:07:42.709992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.358 [2024-11-27 10:07:42.710005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.358 [2024-11-27 10:07:42.725313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.358 [2024-11-27 10:07:42.725328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.358 [2024-11-27 10:07:42.738509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.358 [2024-11-27 10:07:42.738524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.358 [2024-11-27 10:07:42.752912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.358 [2024-11-27 10:07:42.752926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.358 [2024-11-27 10:07:42.766116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.358 [2024-11-27 10:07:42.766130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.358 [2024-11-27 10:07:42.780483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.358 [2024-11-27 10:07:42.780497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.358 [2024-11-27 10:07:42.793553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.358 [2024-11-27 10:07:42.793571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.358 [2024-11-27 10:07:42.806584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.358 [2024-11-27 10:07:42.806599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.358 [2024-11-27 10:07:42.820938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.358 [2024-11-27 10:07:42.820952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.617 [2024-11-27 10:07:42.833966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.617 [2024-11-27 10:07:42.833981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.617 [2024-11-27 10:07:42.846752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.617 [2024-11-27 10:07:42.846767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.617 [2024-11-27 10:07:42.860849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.617 [2024-11-27 10:07:42.860863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.617 [2024-11-27 10:07:42.873746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.617 [2024-11-27 10:07:42.873761] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.617 [2024-11-27 10:07:42.887068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.617 [2024-11-27 10:07:42.887082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.617 [2024-11-27 10:07:42.901019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.617 [2024-11-27 10:07:42.901033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.617 [2024-11-27 10:07:42.914097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.617 [2024-11-27 10:07:42.914111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.617 [2024-11-27 10:07:42.928695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.617 [2024-11-27 10:07:42.928709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.617 [2024-11-27 10:07:42.941854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.617 [2024-11-27 10:07:42.941868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.617 [2024-11-27 10:07:42.954564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.617 [2024-11-27 10:07:42.954577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.617 [2024-11-27 10:07:42.969119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.617 [2024-11-27 10:07:42.969134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.617 [2024-11-27 10:07:42.982341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.617 [2024-11-27 10:07:42.982355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.617 [2024-11-27 10:07:42.997310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.617 [2024-11-27 10:07:42.997325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.617 [2024-11-27 10:07:43.010635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.617 [2024-11-27 10:07:43.010649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.617 [2024-11-27 10:07:43.024752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.617 [2024-11-27 10:07:43.024766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.617 19089.80 IOPS, 149.14 MiB/s 00:35:27.617 Latency(us) 00:35:27.617 [2024-11-27T09:07:43.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:27.617 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:35:27.617 Nvme1n1 : 5.01 19095.96 149.19 0.00 0.00 6697.86 2034.35 11250.35 00:35:27.617 [2024-11-27T09:07:43.083Z] =================================================================================================================== 00:35:27.617 [2024-11-27T09:07:43.083Z] Total : 19095.96 149.19 0.00 0.00 6697.86 2034.35 11250.35 00:35:27.617 [2024-11-27 10:07:43.033714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.617 [2024-11-27 10:07:43.033729] 
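Each error pair above is one failed nvmf_subsystem_add_ns RPC: the target refuses an explicit NSID that is already attached to the subsystem. For reference, the same failure can be provoked against a running target with the stock RPC client; this is a minimal sketch, assuming the default RPC socket and the subsystem/bdev names used in this run:

# Minimal sketch, assuming an SPDK nvmf target is up and malloc0 is already
# attached to nqn.2016-06.io.spdk:cnode1 as NSID 1 (names taken from this run).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Asking for NSID 1 again fails with "Requested NSID 1 already in use";
# omitting -n would let the target assign the next free NSID instead.
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1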
[... nine more identical error pairs follow at ~12 ms intervals, from 10:07:43.033 through 10:07:43.129, as the outstanding add-namespace retries drain ...]
00:35:27.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (4142027) - No such process
00:35:27.877 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 4142027
00:35:27.877 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:35:27.877 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:27.877 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:27.877 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:27.877 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:35:27.877 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:27.877 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:27.877 delay0
00:35:27.877 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:27.877 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
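The three RPCs above swap the subsystem's namespace from the raw malloc bdev to a delay bdev layered on top of it. A standalone equivalent via scripts/rpc.py, as a sketch (the one-second latencies mirror this run; -r/-t are average/p99 read latency and -w/-n average/p99 write latency, in microseconds):

# Sketch: replace NSID 1 of cnode1 with a delay bdev wrapping malloc0.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
$rpc bdev_delay_create -b malloc0 -d delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000   # adds ~1 s to every I/O
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1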
00:35:27.877 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:27.877 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:27.877 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:27.877 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:35:27.877 [2024-11-27 10:07:43.254597] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:35:34.473 Initializing NVMe Controllers
00:35:34.473 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:35:34.473 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:35:34.473 Initialization complete. Launching workers.
00:35:34.473 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 2637
00:35:34.473 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 2916, failed to submit 41
00:35:34.473 	 success 2764, unsuccessful 152, failed 0
00:35:34.473 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:35:34.473 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:35:34.473 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:35:34.473 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:35:34.473 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:35:34.473 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:35:34.473 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:35:34.473 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:35:34.473 rmmod nvme_tcp
00:35:34.473 rmmod nvme_fabrics
00:35:34.473 rmmod nvme_keyring
00:35:34.473 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:35:34.473 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:35:34.473 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:35:34.473 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 4139677 ']'
00:35:34.473 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 4139677
00:35:34.473 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 4139677 ']'
00:35:34.473 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 4139677
00:35:34.473 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:35:34.473 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
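The abort run recorded above drove queue-depth-64 random I/O at NSID 1 for five seconds while submitting aborts for commands still in flight (2916 aborts submitted; 2764 succeeded, 152 did not, none failed outright). Rerun standalone it would look like the sketch below; the flag meanings are inferred from the command line in this log, so treat them as assumptions:

# Sketch: SPDK abort example against the TCP target used in this job.
# -c core mask, -t run time (s), -q queue depth, -w workload pattern,
# -M read percentage, -l log level, -r target transport ID.
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'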
00:35:34.473 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4139677
00:35:34.473 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:35:34.473 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:35:34.473 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4139677'
00:35:34.473 killing process with pid 4139677
00:35:34.473 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 4139677
00:35:34.473 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 4139677
00:35:34.734 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:35:34.734 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:35:34.734 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:35:34.734 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:35:34.734 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:35:34.734 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:35:34.734 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:35:34.734 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:35:34.734 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:35:34.734 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:34.734 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:35:34.734 10:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:36.646 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:35:36.646
00:35:36.646 real	0m33.521s
00:35:36.646 user	0m43.041s
00:35:36.646 sys	0m11.978s
00:35:36.646 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:36.646 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:36.647 ************************************
00:35:36.647 END TEST nvmf_zcopy
00:35:36.647 ************************************
00:35:36.647 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:35:36.647 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:35:36.647 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:35:36.647 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:35:36.908
************************************ 00:35:36.908 START TEST nvmf_nmic 00:35:36.908 ************************************ 00:35:36.908 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:35:36.908 * Looking for test storage... 00:35:36.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:36.908 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:36.908 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:35:36.908 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:36.908 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:36.908 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:36.908 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:36.908 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:36.908 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:35:36.908 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:35:36.908 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:35:36.908 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:35:36.908 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:35:36.908 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:35:36.908 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:35:36.908 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:36.908 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:35:36.908 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:35:36.908 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:36.908 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:36.908 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:35:36.908 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:35:36.908 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:36.908 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:35:36.908 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:35:36.908 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:35:36.908 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:35:36.908 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:36.908 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:35:36.908 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:35:36.908 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:36.908 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:36.908 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:36.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:36.909 --rc genhtml_branch_coverage=1 00:35:36.909 --rc genhtml_function_coverage=1 00:35:36.909 --rc genhtml_legend=1 00:35:36.909 --rc geninfo_all_blocks=1 00:35:36.909 --rc geninfo_unexecuted_blocks=1 00:35:36.909 00:35:36.909 ' 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:36.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:36.909 --rc genhtml_branch_coverage=1 00:35:36.909 --rc genhtml_function_coverage=1 00:35:36.909 --rc genhtml_legend=1 00:35:36.909 --rc geninfo_all_blocks=1 00:35:36.909 --rc geninfo_unexecuted_blocks=1 00:35:36.909 00:35:36.909 ' 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:36.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:36.909 --rc genhtml_branch_coverage=1 00:35:36.909 --rc genhtml_function_coverage=1 00:35:36.909 --rc genhtml_legend=1 00:35:36.909 --rc geninfo_all_blocks=1 00:35:36.909 --rc geninfo_unexecuted_blocks=1 00:35:36.909 00:35:36.909 ' 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:36.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:36.909 --rc genhtml_branch_coverage=1 00:35:36.909 --rc genhtml_function_coverage=1 00:35:36.909 --rc genhtml_legend=1 00:35:36.909 --rc geninfo_all_blocks=1 00:35:36.909 --rc geninfo_unexecuted_blocks=1 00:35:36.909 00:35:36.909 ' 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:36.909 10:07:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:35:36.909 10:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:45.044 10:07:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:45.044 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:45.044 10:07:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:45.044 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:45.045 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:45.045 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:45.045 
10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:45.045 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
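The device scan that just completed never touches a vendor tool: supported NICs are matched purely by PCI vendor:device ID (0x8086:0x159b is the Intel E810 pair found here), and each PCI function is mapped to its kernel interface through sysfs. The equivalent one-off lookup, using this run's first port:

    pci=0000:4b:00.0
    # the vendor/device pair the e810 match keys on
    cat /sys/bus/pci/devices/$pci/vendor /sys/bus/pci/devices/$pci/device
    # kernel net device(s) registered for that function (cvl_0_0 in this run)
    ls /sys/bus/pci/devices/$pci/net/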
00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:45.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:45.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.556 ms 00:35:45.045 00:35:45.045 --- 10.0.0.2 ping statistics --- 00:35:45.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.045 rtt min/avg/max/mdev = 0.556/0.556/0.556/0.000 ms 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:45.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:45.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:35:45.045 00:35:45.045 --- 10.0.0.1 ping statistics --- 00:35:45.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.045 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=4148357 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
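The block above is the whole point-to-point fixture: one port of the physical NIC is pushed into a private network namespace to act as the target, the sibling port stays in the root namespace as the initiator, an INPUT rule is opened for the NVMe/TCP port (tagged with an SPDK_NVMF comment so teardown can find it later), and a ping in each direction proves the path. Condensed with this run's names:

    NS=cvl_0_0_ns_spdk
    ip netns add $NS
    ip link set cvl_0_0 netns $NS             # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator side stays in the root namespace
    ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec $NS ip link set cvl_0_0 up
    ip netns exec $NS ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                        # root ns -> namespaced target
    ip netns exec $NS ping -c 1 10.0.0.1      # target ns -> initiator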
nvmf/common.sh@510 -- # waitforlisten 4148357 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 4148357 ']' 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:45.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:45.045 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:45.045 [2024-11-27 10:07:59.878693] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:45.045 [2024-11-27 10:07:59.879826] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:35:45.045 [2024-11-27 10:07:59.879878] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:45.045 [2024-11-27 10:07:59.980442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:45.045 [2024-11-27 10:08:00.038567] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:45.045 [2024-11-27 10:08:00.038618] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:45.045 [2024-11-27 10:08:00.038628] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:45.045 [2024-11-27 10:08:00.038635] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:45.045 [2024-11-27 10:08:00.038642] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:45.045 [2024-11-27 10:08:00.040743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:45.045 [2024-11-27 10:08:00.040899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:45.045 [2024-11-27 10:08:00.041029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:45.045 [2024-11-27 10:08:00.041030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:45.045 [2024-11-27 10:08:00.123111] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:45.045 [2024-11-27 10:08:00.124346] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:45.045 [2024-11-27 10:08:00.124514] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
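nvmfappstart launched the target inside that namespace with interrupt mode on; the reactor and poll-group NOTICEs above confirm every SPDK thread came up in intr mode rather than busy-polling. waitforlisten then blocks until the RPC socket answers. A sketch of the same launch-and-wait, run from the spdk checkout, with a simplified polling loop standing in for waitforlisten:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    pid=$!
    # wait until the app answers on its UNIX-domain RPC socket
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 $pid 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done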
00:35:45.045 [2024-11-27 10:08:00.124897] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:45.045 [2024-11-27 10:08:00.124983] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:45.305 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:45.305 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:35:45.305 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:45.305 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:45.305 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:45.305 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:45.305 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:45.305 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.305 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:45.305 [2024-11-27 10:08:00.741906] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:45.305 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:45.565 Malloc0 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
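Target-side provisioning is just five JSON-RPC calls, all traced above (the listener's Listening notice follows just below); rpc_cmd is a wrapper around scripts/rpc.py. Replayed standalone:

    rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    rpc nvmf_create_transport -t tcp -o -u 8192      # the NVMF_TRANSPORT_OPTS built earlier
    rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM bdev, 512 B blocks
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME                   # -a: allow any host NQN
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420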
00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:45.565 [2024-11-27 10:08:00.834247] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:35:45.565 test case1: single bdev can't be used in multiple subsystems 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:45.565 [2024-11-27 10:08:00.869510] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:35:45.565 [2024-11-27 10:08:00.869539] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:35:45.565 [2024-11-27 10:08:00.869548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.565 request: 00:35:45.565 { 00:35:45.565 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:35:45.565 "namespace": { 00:35:45.565 "bdev_name": "Malloc0", 00:35:45.565 "no_auto_visible": false 00:35:45.565 }, 00:35:45.565 "method": "nvmf_subsystem_add_ns", 00:35:45.565 "req_id": 1 00:35:45.565 } 00:35:45.565 Got JSON-RPC error response 00:35:45.565 response: 00:35:45.565 { 00:35:45.565 "code": -32602, 00:35:45.565 "message": "Invalid parameters" 00:35:45.565 } 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:35:45.565 10:08:00 
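The error above is the test passing: adding Malloc0 to cnode1 took an exclusive_write claim on the bdev, so a second subsystem cannot open it and the RPC fails with -32602. nmic.sh inverts the status, treating success as the bug; the pattern, sketched:

    if ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo "bdev was shared across subsystems - should be impossible" >&2
        exit 1
    fi
    echo " Adding namespace failed - expected result."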
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:35:45.565 Adding namespace failed - expected result. 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:35:45.565 test case2: host connect to nvmf target in multiple paths 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:45.565 [2024-11-27 10:08:00.881660] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.565 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:45.825 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:35:46.393 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:35:46.393 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:35:46.393 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:46.393 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:35:46.393 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:35:48.302 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:48.302 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:35:48.302 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:48.302 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:35:48.302 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:48.302 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:35:48.302 10:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:35:48.302 [global] 00:35:48.302 thread=1 00:35:48.302 invalidate=1 
00:35:48.302 rw=write 00:35:48.302 time_based=1 00:35:48.302 runtime=1 00:35:48.302 ioengine=libaio 00:35:48.302 direct=1 00:35:48.302 bs=4096 00:35:48.302 iodepth=1 00:35:48.302 norandommap=0 00:35:48.302 numjobs=1 00:35:48.302 00:35:48.302 verify_dump=1 00:35:48.302 verify_backlog=512 00:35:48.302 verify_state_save=0 00:35:48.302 do_verify=1 00:35:48.302 verify=crc32c-intel 00:35:48.302 [job0] 00:35:48.302 filename=/dev/nvme0n1 00:35:48.302 Could not set queue depth (nvme0n1) 00:35:48.894 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:48.894 fio-3.35 00:35:48.894 Starting 1 thread 00:35:49.837 00:35:49.837 job0: (groupid=0, jobs=1): err= 0: pid=4149421: Wed Nov 27 10:08:05 2024 00:35:49.837 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:35:49.837 slat (nsec): min=8136, max=59910, avg=26061.76, stdev=3762.21 00:35:49.837 clat (usec): min=741, max=1181, avg=1010.94, stdev=83.45 00:35:49.837 lat (usec): min=767, max=1207, avg=1037.00, stdev=83.16 00:35:49.837 clat percentiles (usec): 00:35:49.837 | 1.00th=[ 783], 5.00th=[ 840], 10.00th=[ 881], 20.00th=[ 947], 00:35:49.837 | 30.00th=[ 988], 40.00th=[ 1012], 50.00th=[ 1029], 60.00th=[ 1045], 00:35:49.837 | 70.00th=[ 1057], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1123], 00:35:49.837 | 99.00th=[ 1139], 99.50th=[ 1156], 99.90th=[ 1188], 99.95th=[ 1188], 00:35:49.837 | 99.99th=[ 1188] 00:35:49.837 write: IOPS=745, BW=2981KiB/s (3053kB/s)(2984KiB/1001msec); 0 zone resets 00:35:49.837 slat (nsec): min=9445, max=69610, avg=28471.05, stdev=10132.34 00:35:49.837 clat (usec): min=305, max=793, avg=587.48, stdev=91.20 00:35:49.837 lat (usec): min=334, max=838, avg=615.95, stdev=95.89 00:35:49.837 clat percentiles (usec): 00:35:49.837 | 1.00th=[ 351], 5.00th=[ 416], 10.00th=[ 469], 20.00th=[ 510], 00:35:49.837 | 30.00th=[ 545], 40.00th=[ 570], 50.00th=[ 586], 60.00th=[ 619], 00:35:49.837 | 70.00th=[ 644], 80.00th=[ 676], 90.00th=[ 693], 95.00th=[ 717], 00:35:49.837 | 99.00th=[ 766], 99.50th=[ 775], 99.90th=[ 791], 99.95th=[ 791], 00:35:49.837 | 99.99th=[ 791] 00:35:49.837 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:35:49.837 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:49.837 lat (usec) : 500=10.57%, 750=47.77%, 1000=15.10% 00:35:49.837 lat (msec) : 2=26.55% 00:35:49.837 cpu : usr=1.80%, sys=3.60%, ctx=1258, majf=0, minf=1 00:35:49.837 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:49.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.837 issued rwts: total=512,746,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.837 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:49.837 00:35:49.837 Run status group 0 (all jobs): 00:35:49.837 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:35:49.837 WRITE: bw=2981KiB/s (3053kB/s), 2981KiB/s-2981KiB/s (3053kB/s-3053kB/s), io=2984KiB (3056kB), run=1001-1001msec 00:35:49.837 00:35:49.837 Disk stats (read/write): 00:35:49.837 nvme0n1: ios=562/583, merge=0/0, ticks=549/322, in_queue=871, util=93.19% 00:35:49.837 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:50.100 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:35:50.100 10:08:05 
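fio-wrapper renders a job file from its flags (-p nvmf -i 4096 -d 1 -t write -r 1 -v: 4 KiB blocks, queue depth 1, a one-second timed write pass with crc32c verification) and the [global]/[job0] dump above is that file verbatim. Rebuilt as a standalone run, assuming the namespace landed on /dev/nvme0n1 as it did here:

    cat > job0.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=write
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=1
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel
    [job0]
    filename=/dev/nvme0n1
    EOF
    fio job0.fio

In the results, the READ line is the crc32c verify pass reading the written data back, which is why a pure-write job still reports read bandwidth.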
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:50.100 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:35:50.100 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:50.100 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:50.100 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:50.100 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:50.100 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:35:50.100 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:35:50.100 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:35:50.100 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:50.100 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:35:50.100 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:50.100 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:35:50.100 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:50.100 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:50.100 rmmod nvme_tcp 00:35:50.100 rmmod nvme_fabrics 00:35:50.100 rmmod nvme_keyring 00:35:50.100 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:50.100 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:35:50.100 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:35:50.100 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 4148357 ']' 00:35:50.100 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 4148357 00:35:50.100 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 4148357 ']' 00:35:50.100 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 4148357 00:35:50.100 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:35:50.100 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:50.100 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4148357 00:35:50.361 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:50.361 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:50.361 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 4148357' 00:35:50.361 killing process with pid 4148357 00:35:50.361 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 4148357 00:35:50.361 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 4148357 00:35:50.361 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:50.361 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:50.361 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:50.361 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:35:50.361 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:35:50.361 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:50.361 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:35:50.361 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:50.361 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:50.361 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:50.361 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:50.361 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:52.912 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:52.912 00:35:52.912 real 0m15.663s 00:35:52.912 user 0m33.506s 00:35:52.912 sys 0m7.576s 00:35:52.912 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:52.912 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:52.912 ************************************ 00:35:52.912 END TEST nvmf_nmic 00:35:52.912 ************************************ 00:35:52.912 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:35:52.912 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:52.912 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:52.912 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:52.912 ************************************ 00:35:52.912 START TEST nvmf_fio_target 00:35:52.912 ************************************ 00:35:52.912 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:35:52.912 * Looking for test storage... 
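Teardown, traced just above, walks the setup backwards: disconnect the initiator, kill the target by the saved pid, strip only the firewall rules this run added, and drop the namespace. The iptr helper does the firewall step by round-tripping the ruleset and filtering on the SPDK_NVMF comment that every earlier insert carried:

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null
    # every rule was added with -m comment --comment 'SPDK_NVMF:...',
    # so dropping those lines restores the pre-test ruleset
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk    # assumed endpoint of _remove_spdk_ns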
00:35:52.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:52.912 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:52.912 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:35:52.912 10:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:52.912 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:52.912 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:52.912 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:52.912 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:52.912 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:52.912 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:52.912 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:52.912 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:52.912 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:52.912 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:52.912 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:52.912 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:52.912 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:35:52.912 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:35:52.912 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:52.912 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:52.912 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:35:52.912 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:35:52.912 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:52.912 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:35:52.912 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:52.912 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:35:52.912 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:35:52.912 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:52.912 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:35:52.912 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:52.912 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:52.912 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:52.912 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:35:52.912 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:52.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:52.913 --rc genhtml_branch_coverage=1 00:35:52.913 --rc genhtml_function_coverage=1 00:35:52.913 --rc genhtml_legend=1 00:35:52.913 --rc geninfo_all_blocks=1 00:35:52.913 --rc geninfo_unexecuted_blocks=1 00:35:52.913 00:35:52.913 ' 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:52.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:52.913 --rc genhtml_branch_coverage=1 00:35:52.913 --rc genhtml_function_coverage=1 00:35:52.913 --rc genhtml_legend=1 00:35:52.913 --rc geninfo_all_blocks=1 00:35:52.913 --rc geninfo_unexecuted_blocks=1 00:35:52.913 00:35:52.913 ' 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:52.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:52.913 --rc genhtml_branch_coverage=1 00:35:52.913 --rc genhtml_function_coverage=1 00:35:52.913 --rc genhtml_legend=1 00:35:52.913 --rc geninfo_all_blocks=1 00:35:52.913 --rc geninfo_unexecuted_blocks=1 00:35:52.913 00:35:52.913 ' 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:52.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:52.913 --rc genhtml_branch_coverage=1 00:35:52.913 --rc genhtml_function_coverage=1 00:35:52.913 --rc genhtml_legend=1 00:35:52.913 --rc geninfo_all_blocks=1 00:35:52.913 --rc geninfo_unexecuted_blocks=1 00:35:52.913 
00:35:52.913 ' 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- #
NVMF_APP+=("${NO_HUGE[@]}") 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:35:52.913 10:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:01.071 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:01.071 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:36:01.071 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:01.071 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:01.071 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:01.071 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:01.071 10:08:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:01.071 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:36:01.071 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:01.071 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:36:01.071 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:36:01.071 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:36:01.071 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:36:01.071 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:36:01.071 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:36:01.071 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:01.071 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:01.072 10:08:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:01.072 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:01.072 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:01.072 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:01.072 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:01.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:01.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.531 ms 00:36:01.072 00:36:01.072 --- 10.0.0.2 ping statistics --- 00:36:01.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:01.072 rtt min/avg/max/mdev = 0.531/0.531/0.531/0.000 ms 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:01.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:01.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:36:01.072 00:36:01.072 --- 10.0.0.1 ping statistics --- 00:36:01.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:01.072 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:01.072 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:01.073 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:36:01.073 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:01.073 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:01.073 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:01.073 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=4153884 00:36:01.073 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 4153884 00:36:01.073 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:36:01.073 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 4153884 ']' 00:36:01.073 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:01.073 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:01.073 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:01.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:01.073 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:01.073 10:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:01.073 [2024-11-27 10:08:15.610262] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:01.073 [2024-11-27 10:08:15.611397] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:36:01.073 [2024-11-27 10:08:15.611450] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:01.073 [2024-11-27 10:08:15.710623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:01.073 [2024-11-27 10:08:15.763653] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:01.073 [2024-11-27 10:08:15.763707] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:01.073 [2024-11-27 10:08:15.763723] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:01.073 [2024-11-27 10:08:15.763730] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:01.073 [2024-11-27 10:08:15.763736] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:01.073 [2024-11-27 10:08:15.765759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:01.073 [2024-11-27 10:08:15.765919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:01.073 [2024-11-27 10:08:15.766076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:01.073 [2024-11-27 10:08:15.766076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:01.073 [2024-11-27 10:08:15.842985] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:01.073 [2024-11-27 10:08:15.843920] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:01.073 [2024-11-27 10:08:15.844197] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:36:01.073 [2024-11-27 10:08:15.844870] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:01.073 [2024-11-27 10:08:15.844920] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
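Condensed for readability, the nvmf_tcp_init plumbing traced above builds a two-port loopback testbed: one E810 port stays in the root namespace as the initiator, its sibling moves into a private namespace as the target, so NVMe/TCP traffic crosses real hardware. A minimal bash sketch of that sequence (interface names, addresses, and the iptables rule are verbatim from the trace; the address flushes and error handling are omitted):

set -e
NS=cvl_0_0_ns_spdk                                        # target namespace, as in the trace

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                           # target port leaves the root ns
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                        # initiator -> target sanity check
ip netns exec "$NS" ping -c 1 10.0.0.1                    # target -> initiator sanity check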
00:36:01.073 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:01.073 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:36:01.073 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:01.073 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:01.073 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:01.073 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:01.073 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:01.334 [2024-11-27 10:08:16.627118] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:01.334 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:01.594 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:36:01.594 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:01.855 10:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:36:01.855 10:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:01.855 10:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:36:01.856 10:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:02.117 10:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:36:02.117 10:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:36:02.378 10:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:02.640 10:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:36:02.640 10:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:02.640 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:36:02.640 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:02.901 10:08:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:36:02.901 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:36:03.161 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:03.422 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:36:03.422 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:03.422 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:36:03.422 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:36:03.684 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:03.944 [2024-11-27 10:08:19.187059] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:03.944 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:36:04.204 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:36:04.204 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:04.775 10:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:36:04.775 10:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:36:04.775 10:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:36:04.775 10:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:36:04.775 10:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:36:04.775 10:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:36:06.689 10:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:36:06.689 10:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:36:06.689 10:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:36:06.689 10:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:36:06.689 10:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:36:06.689 10:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:36:06.689 10:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:36:06.689 [global] 00:36:06.689 thread=1 00:36:06.689 invalidate=1 00:36:06.689 rw=write 00:36:06.689 time_based=1 00:36:06.689 runtime=1 00:36:06.689 ioengine=libaio 00:36:06.689 direct=1 00:36:06.689 bs=4096 00:36:06.689 iodepth=1 00:36:06.689 norandommap=0 00:36:06.689 numjobs=1 00:36:06.689 00:36:06.689 verify_dump=1 00:36:06.689 verify_backlog=512 00:36:06.689 verify_state_save=0 00:36:06.689 do_verify=1 00:36:06.689 verify=crc32c-intel 00:36:06.689 [job0] 00:36:06.689 filename=/dev/nvme0n1 00:36:06.689 [job1] 00:36:06.689 filename=/dev/nvme0n2 00:36:06.689 [job2] 00:36:06.689 filename=/dev/nvme0n3 00:36:06.689 [job3] 00:36:06.689 filename=/dev/nvme0n4 00:36:06.968 Could not set queue depth (nvme0n1) 00:36:06.968 Could not set queue depth (nvme0n2) 00:36:06.968 Could not set queue depth (nvme0n3) 00:36:06.968 Could not set queue depth (nvme0n4) 00:36:07.229 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:07.229 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:07.229 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:07.229 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:07.229 fio-3.35 00:36:07.229 Starting 4 threads 00:36:08.614 00:36:08.614 job0: (groupid=0, jobs=1): err= 0: pid=4155336: Wed Nov 27 10:08:23 2024 00:36:08.614 read: IOPS=17, BW=70.7KiB/s (72.4kB/s)(72.0KiB/1019msec) 00:36:08.614 slat (nsec): min=26848, max=27637, avg=27085.11, stdev=241.65 00:36:08.614 clat (usec): min=1178, max=42132, avg=39286.12, stdev=9523.37 00:36:08.614 lat (usec): min=1205, max=42159, avg=39313.21, stdev=9523.23 00:36:08.614 clat percentiles (usec): 00:36:08.614 | 1.00th=[ 1172], 5.00th=[ 1172], 10.00th=[40633], 20.00th=[41157], 00:36:08.614 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:36:08.614 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:08.614 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:08.614 | 99.99th=[42206] 00:36:08.614 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:36:08.614 slat (nsec): min=9310, max=70717, avg=32261.04, stdev=8928.18 00:36:08.614 clat (usec): min=217, max=1644, avg=569.09, stdev=148.17 00:36:08.614 lat (usec): min=244, max=1681, avg=601.36, stdev=150.47 00:36:08.614 clat percentiles (usec): 00:36:08.614 | 1.00th=[ 262], 5.00th=[ 330], 10.00th=[ 375], 20.00th=[ 441], 00:36:08.614 | 30.00th=[ 498], 40.00th=[ 545], 50.00th=[ 578], 60.00th=[ 603], 00:36:08.614 | 70.00th=[ 644], 80.00th=[ 685], 90.00th=[ 750], 95.00th=[ 791], 
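Before this first fio pass, the target was provisioned entirely over rpc.py; stripped of timestamps, the trace above reduces to roughly the following sequence (commands and arguments are verbatim from the log; the $RPC shorthand and the namespace loop are ours, and the listener registration is regrouped after the namespaces for brevity):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192   # TCP transport for the target
$RPC bdev_malloc_create 64 512                 # Malloc0 } exported as plain namespaces
$RPC bdev_malloc_create 64 512                 # Malloc1 }
$RPC bdev_malloc_create 64 512                 # Malloc2 } striped into raid0
$RPC bdev_malloc_create 64 512                 # Malloc3 }
$RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$RPC bdev_malloc_create 64 512                 # Malloc4 } concatenated into concat0
$RPC bdev_malloc_create 64 512                 # Malloc5 }
$RPC bdev_malloc_create 64 512                 # Malloc6 }
$RPC bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
done
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be

The four namespaces then surface on the initiator as /dev/nvme0n1 through /dev/nvme0n4 with serial SPDKISFASTANDAWESOME, which is exactly what the waitforserial lsblk poll above checks for.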
00:36:08.614 | 99.00th=[ 889], 99.50th=[ 947], 99.90th=[ 1647], 99.95th=[ 1647], 00:36:08.614 | 99.99th=[ 1647] 00:36:08.614 bw ( KiB/s): min= 4096, max= 4096, per=39.41%, avg=4096.00, stdev= 0.00, samples=1 00:36:08.614 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:08.614 lat (usec) : 250=0.75%, 500=28.87%, 750=57.17%, 1000=9.43% 00:36:08.614 lat (msec) : 2=0.57%, 50=3.21% 00:36:08.614 cpu : usr=0.59%, sys=2.55%, ctx=531, majf=0, minf=1 00:36:08.614 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:08.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.614 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.614 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:08.614 job1: (groupid=0, jobs=1): err= 0: pid=4155353: Wed Nov 27 10:08:23 2024 00:36:08.614 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:36:08.614 slat (nsec): min=6816, max=67554, avg=27794.95, stdev=4128.22 00:36:08.614 clat (usec): min=465, max=1198, avg=906.31, stdev=118.26 00:36:08.614 lat (usec): min=492, max=1225, avg=934.10, stdev=118.20 00:36:08.614 clat percentiles (usec): 00:36:08.614 | 1.00th=[ 537], 5.00th=[ 701], 10.00th=[ 775], 20.00th=[ 816], 00:36:08.614 | 30.00th=[ 848], 40.00th=[ 881], 50.00th=[ 922], 60.00th=[ 955], 00:36:08.614 | 70.00th=[ 979], 80.00th=[ 1004], 90.00th=[ 1037], 95.00th=[ 1074], 00:36:08.614 | 99.00th=[ 1139], 99.50th=[ 1156], 99.90th=[ 1205], 99.95th=[ 1205], 00:36:08.614 | 99.99th=[ 1205] 00:36:08.614 write: IOPS=893, BW=3572KiB/s (3658kB/s)(3576KiB/1001msec); 0 zone resets 00:36:08.614 slat (nsec): min=9530, max=55870, avg=34226.66, stdev=8255.61 00:36:08.614 clat (usec): min=150, max=1618, avg=536.41, stdev=154.30 00:36:08.614 lat (usec): min=163, max=1654, avg=570.64, stdev=156.33 00:36:08.614 clat percentiles (usec): 00:36:08.614 | 1.00th=[ 233], 5.00th=[ 285], 10.00th=[ 334], 20.00th=[ 408], 00:36:08.614 | 30.00th=[ 453], 40.00th=[ 494], 50.00th=[ 537], 60.00th=[ 570], 00:36:08.614 | 70.00th=[ 619], 80.00th=[ 676], 90.00th=[ 734], 95.00th=[ 766], 00:36:08.614 | 99.00th=[ 873], 99.50th=[ 906], 99.90th=[ 1614], 99.95th=[ 1614], 00:36:08.614 | 99.99th=[ 1614] 00:36:08.614 bw ( KiB/s): min= 4096, max= 4096, per=39.41%, avg=4096.00, stdev= 0.00, samples=1 00:36:08.614 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:08.614 lat (usec) : 250=1.00%, 500=25.60%, 750=35.49%, 1000=30.01% 00:36:08.614 lat (msec) : 2=7.89% 00:36:08.614 cpu : usr=4.00%, sys=4.80%, ctx=1408, majf=0, minf=1 00:36:08.614 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:08.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.614 issued rwts: total=512,894,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.614 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:08.614 job2: (groupid=0, jobs=1): err= 0: pid=4155370: Wed Nov 27 10:08:23 2024 00:36:08.614 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:36:08.614 slat (nsec): min=8572, max=57403, avg=27773.95, stdev=3047.91 00:36:08.614 clat (usec): min=701, max=1232, avg=985.80, stdev=94.42 00:36:08.614 lat (usec): min=729, max=1260, avg=1013.57, stdev=94.37 00:36:08.614 clat percentiles (usec): 00:36:08.614 | 1.00th=[ 750], 5.00th=[ 824], 10.00th=[ 873], 20.00th=[ 914], 
00:36:08.614 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 979], 60.00th=[ 1004], 00:36:08.614 | 70.00th=[ 1029], 80.00th=[ 1057], 90.00th=[ 1123], 95.00th=[ 1156], 00:36:08.614 | 99.00th=[ 1221], 99.50th=[ 1221], 99.90th=[ 1237], 99.95th=[ 1237], 00:36:08.614 | 99.99th=[ 1237] 00:36:08.614 write: IOPS=760, BW=3041KiB/s (3114kB/s)(3044KiB/1001msec); 0 zone resets 00:36:08.614 slat (nsec): min=10154, max=68235, avg=32478.86, stdev=9884.29 00:36:08.614 clat (usec): min=244, max=1000, avg=584.60, stdev=135.63 00:36:08.614 lat (usec): min=254, max=1036, avg=617.08, stdev=138.57 00:36:08.614 clat percentiles (usec): 00:36:08.614 | 1.00th=[ 322], 5.00th=[ 371], 10.00th=[ 416], 20.00th=[ 465], 00:36:08.614 | 30.00th=[ 506], 40.00th=[ 545], 50.00th=[ 578], 60.00th=[ 611], 00:36:08.614 | 70.00th=[ 652], 80.00th=[ 701], 90.00th=[ 775], 95.00th=[ 816], 00:36:08.614 | 99.00th=[ 914], 99.50th=[ 963], 99.90th=[ 1004], 99.95th=[ 1004], 00:36:08.614 | 99.99th=[ 1004] 00:36:08.614 bw ( KiB/s): min= 4096, max= 4096, per=39.41%, avg=4096.00, stdev= 0.00, samples=1 00:36:08.614 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:08.614 lat (usec) : 250=0.08%, 500=17.05%, 750=35.43%, 1000=30.48% 00:36:08.614 lat (msec) : 2=16.97% 00:36:08.614 cpu : usr=1.30%, sys=4.60%, ctx=1275, majf=0, minf=1 00:36:08.614 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:08.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.614 issued rwts: total=512,761,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.614 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:08.614 job3: (groupid=0, jobs=1): err= 0: pid=4155376: Wed Nov 27 10:08:23 2024 00:36:08.614 read: IOPS=17, BW=69.8KiB/s (71.5kB/s)(72.0KiB/1031msec) 00:36:08.614 slat (nsec): min=25966, max=26969, avg=26229.39, stdev=237.26 00:36:08.614 clat (usec): min=1143, max=42053, avg=39574.09, stdev=9594.79 00:36:08.614 lat (usec): min=1169, max=42079, avg=39600.32, stdev=9594.85 00:36:08.614 clat percentiles (usec): 00:36:08.614 | 1.00th=[ 1139], 5.00th=[ 1139], 10.00th=[41157], 20.00th=[41681], 00:36:08.614 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:36:08.614 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:08.614 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:08.614 | 99.99th=[42206] 00:36:08.614 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:36:08.614 slat (nsec): min=9433, max=65654, avg=29230.33, stdev=11482.98 00:36:08.614 clat (usec): min=216, max=4025, avg=582.61, stdev=247.41 00:36:08.614 lat (usec): min=229, max=4035, avg=611.84, stdev=250.86 00:36:08.614 clat percentiles (usec): 00:36:08.614 | 1.00th=[ 281], 5.00th=[ 310], 10.00th=[ 347], 20.00th=[ 383], 00:36:08.614 | 30.00th=[ 482], 40.00th=[ 529], 50.00th=[ 586], 60.00th=[ 619], 00:36:08.614 | 70.00th=[ 660], 80.00th=[ 717], 90.00th=[ 791], 95.00th=[ 848], 00:36:08.614 | 99.00th=[ 1352], 99.50th=[ 1631], 99.90th=[ 4015], 99.95th=[ 4015], 00:36:08.614 | 99.99th=[ 4015] 00:36:08.614 bw ( KiB/s): min= 4096, max= 4096, per=39.41%, avg=4096.00, stdev= 0.00, samples=1 00:36:08.614 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:08.614 lat (usec) : 250=0.57%, 500=32.64%, 750=49.43%, 1000=12.45% 00:36:08.614 lat (msec) : 2=1.32%, 4=0.19%, 10=0.19%, 50=3.21% 00:36:08.614 cpu : usr=0.87%, sys=1.26%, ctx=531, majf=0, 
minf=1 00:36:08.614 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:08.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.615 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.615 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:08.615 00:36:08.615 Run status group 0 (all jobs): 00:36:08.615 READ: bw=4113KiB/s (4211kB/s), 69.8KiB/s-2046KiB/s (71.5kB/s-2095kB/s), io=4240KiB (4342kB), run=1001-1031msec 00:36:08.615 WRITE: bw=10.1MiB/s (10.6MB/s), 1986KiB/s-3572KiB/s (2034kB/s-3658kB/s), io=10.5MiB (11.0MB), run=1001-1031msec 00:36:08.615 00:36:08.615 Disk stats (read/write): 00:36:08.615 nvme0n1: ios=63/512, merge=0/0, ticks=535/211, in_queue=746, util=86.07% 00:36:08.615 nvme0n2: ios=534/600, merge=0/0, ticks=1286/239, in_queue=1525, util=87.84% 00:36:08.615 nvme0n3: ios=553/512, merge=0/0, ticks=1260/296, in_queue=1556, util=94.93% 00:36:08.615 nvme0n4: ios=36/512, merge=0/0, ticks=1410/286, in_queue=1696, util=94.00% 00:36:08.615 10:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:36:08.615 [global] 00:36:08.615 thread=1 00:36:08.615 invalidate=1 00:36:08.615 rw=randwrite 00:36:08.615 time_based=1 00:36:08.615 runtime=1 00:36:08.615 ioengine=libaio 00:36:08.615 direct=1 00:36:08.615 bs=4096 00:36:08.615 iodepth=1 00:36:08.615 norandommap=0 00:36:08.615 numjobs=1 00:36:08.615 00:36:08.615 verify_dump=1 00:36:08.615 verify_backlog=512 00:36:08.615 verify_state_save=0 00:36:08.615 do_verify=1 00:36:08.615 verify=crc32c-intel 00:36:08.615 [job0] 00:36:08.615 filename=/dev/nvme0n1 00:36:08.615 [job1] 00:36:08.615 filename=/dev/nvme0n2 00:36:08.615 [job2] 00:36:08.615 filename=/dev/nvme0n3 00:36:08.615 [job3] 00:36:08.615 filename=/dev/nvme0n4 00:36:08.615 Could not set queue depth (nvme0n1) 00:36:08.615 Could not set queue depth (nvme0n2) 00:36:08.615 Could not set queue depth (nvme0n3) 00:36:08.615 Could not set queue depth (nvme0n4) 00:36:08.885 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:08.885 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:08.885 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:08.885 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:08.885 fio-3.35 00:36:08.885 Starting 4 threads 00:36:10.274 00:36:10.274 job0: (groupid=0, jobs=1): err= 0: pid=4155797: Wed Nov 27 10:08:25 2024 00:36:10.274 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:36:10.274 slat (nsec): min=7200, max=56640, avg=27012.67, stdev=2968.62 00:36:10.274 clat (usec): min=747, max=1780, avg=1003.41, stdev=80.43 00:36:10.274 lat (usec): min=774, max=1807, avg=1030.42, stdev=80.39 00:36:10.274 clat percentiles (usec): 00:36:10.274 | 1.00th=[ 816], 5.00th=[ 881], 10.00th=[ 922], 20.00th=[ 955], 00:36:10.274 | 30.00th=[ 971], 40.00th=[ 988], 50.00th=[ 996], 60.00th=[ 1012], 00:36:10.274 | 70.00th=[ 1037], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1139], 00:36:10.274 | 99.00th=[ 1188], 99.50th=[ 1205], 99.90th=[ 1778], 99.95th=[ 1778], 00:36:10.274 | 99.99th=[ 1778] 00:36:10.274 write: IOPS=687, 
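Both low-depth passes run the same generated job file; only rw differs (write, then randwrite), and the two later passes raise iodepth to 128. As a sketch under those assumptions, the wrapper's job file could be reproduced like this (parameters are verbatim from the [global]/[jobN] dump above; the heredoc form and file path are illustrative):

cat > /tmp/nvmf.fio <<'EOF'
[global]
thread=1
invalidate=1
; first pass uses rw=write, second uses rw=randwrite
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
; raised to 128 in the two later passes
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio /tmp/nvmf.fio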
BW=2749KiB/s (2815kB/s)(2752KiB/1001msec); 0 zone resets 00:36:10.274 slat (nsec): min=8861, max=75380, avg=30197.31, stdev=9489.46 00:36:10.274 clat (usec): min=243, max=1335, avg=642.27, stdev=122.91 00:36:10.274 lat (usec): min=254, max=1387, avg=672.47, stdev=127.19 00:36:10.274 clat percentiles (usec): 00:36:10.274 | 1.00th=[ 347], 5.00th=[ 429], 10.00th=[ 474], 20.00th=[ 553], 00:36:10.274 | 30.00th=[ 586], 40.00th=[ 619], 50.00th=[ 652], 60.00th=[ 676], 00:36:10.274 | 70.00th=[ 709], 80.00th=[ 742], 90.00th=[ 783], 95.00th=[ 816], 00:36:10.274 | 99.00th=[ 898], 99.50th=[ 938], 99.90th=[ 1336], 99.95th=[ 1336], 00:36:10.274 | 99.99th=[ 1336] 00:36:10.274 bw ( KiB/s): min= 4096, max= 4096, per=40.41%, avg=4096.00, stdev= 0.00, samples=1 00:36:10.274 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:10.274 lat (usec) : 250=0.08%, 500=7.42%, 750=40.33%, 1000=31.92% 00:36:10.274 lat (msec) : 2=20.25% 00:36:10.274 cpu : usr=2.00%, sys=5.30%, ctx=1201, majf=0, minf=2 00:36:10.274 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:10.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.274 issued rwts: total=512,688,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.274 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:10.274 job1: (groupid=0, jobs=1): err= 0: pid=4155816: Wed Nov 27 10:08:25 2024 00:36:10.274 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:36:10.274 slat (nsec): min=27423, max=60863, avg=28379.77, stdev=2982.85 00:36:10.274 clat (usec): min=609, max=1249, avg=985.15, stdev=88.50 00:36:10.274 lat (usec): min=637, max=1277, avg=1013.53, stdev=88.24 00:36:10.274 clat percentiles (usec): 00:36:10.274 | 1.00th=[ 758], 5.00th=[ 832], 10.00th=[ 881], 20.00th=[ 922], 00:36:10.274 | 30.00th=[ 947], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 1012], 00:36:10.274 | 70.00th=[ 1029], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1123], 00:36:10.274 | 99.00th=[ 1188], 99.50th=[ 1205], 99.90th=[ 1254], 99.95th=[ 1254], 00:36:10.274 | 99.99th=[ 1254] 00:36:10.275 write: IOPS=714, BW=2857KiB/s (2926kB/s)(2860KiB/1001msec); 0 zone resets 00:36:10.275 slat (nsec): min=9067, max=54244, avg=30887.00, stdev=9719.33 00:36:10.275 clat (usec): min=166, max=1052, avg=623.99, stdev=125.22 00:36:10.275 lat (usec): min=178, max=1086, avg=654.87, stdev=129.18 00:36:10.275 clat percentiles (usec): 00:36:10.275 | 1.00th=[ 334], 5.00th=[ 408], 10.00th=[ 461], 20.00th=[ 502], 00:36:10.275 | 30.00th=[ 570], 40.00th=[ 603], 50.00th=[ 635], 60.00th=[ 668], 00:36:10.275 | 70.00th=[ 701], 80.00th=[ 734], 90.00th=[ 783], 95.00th=[ 807], 00:36:10.275 | 99.00th=[ 873], 99.50th=[ 906], 99.90th=[ 1057], 99.95th=[ 1057], 00:36:10.275 | 99.99th=[ 1057] 00:36:10.275 bw ( KiB/s): min= 4096, max= 4096, per=40.41%, avg=4096.00, stdev= 0.00, samples=1 00:36:10.275 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:10.275 lat (usec) : 250=0.16%, 500=11.33%, 750=38.06%, 1000=31.87% 00:36:10.275 lat (msec) : 2=18.58% 00:36:10.275 cpu : usr=3.90%, sys=3.60%, ctx=1229, majf=0, minf=1 00:36:10.275 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:10.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.275 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.275 issued rwts: total=512,715,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.275 
latency : target=0, window=0, percentile=100.00%, depth=1 00:36:10.275 job2: (groupid=0, jobs=1): err= 0: pid=4155837: Wed Nov 27 10:08:25 2024 00:36:10.275 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:36:10.275 slat (nsec): min=25926, max=45021, avg=27110.79, stdev=1823.51 00:36:10.275 clat (usec): min=665, max=1396, avg=987.01, stdev=99.83 00:36:10.275 lat (usec): min=692, max=1423, avg=1014.12, stdev=99.87 00:36:10.275 clat percentiles (usec): 00:36:10.275 | 1.00th=[ 766], 5.00th=[ 816], 10.00th=[ 865], 20.00th=[ 914], 00:36:10.275 | 30.00th=[ 947], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 1004], 00:36:10.275 | 70.00th=[ 1029], 80.00th=[ 1057], 90.00th=[ 1106], 95.00th=[ 1156], 00:36:10.275 | 99.00th=[ 1254], 99.50th=[ 1270], 99.90th=[ 1401], 99.95th=[ 1401], 00:36:10.275 | 99.99th=[ 1401] 00:36:10.275 write: IOPS=676, BW=2705KiB/s (2770kB/s)(2708KiB/1001msec); 0 zone resets 00:36:10.275 slat (nsec): min=9674, max=54454, avg=31568.20, stdev=8562.09 00:36:10.275 clat (usec): min=171, max=1180, avg=660.26, stdev=140.23 00:36:10.275 lat (usec): min=205, max=1213, avg=691.83, stdev=142.92 00:36:10.275 clat percentiles (usec): 00:36:10.275 | 1.00th=[ 310], 5.00th=[ 420], 10.00th=[ 469], 20.00th=[ 553], 00:36:10.275 | 30.00th=[ 594], 40.00th=[ 627], 50.00th=[ 660], 60.00th=[ 701], 00:36:10.275 | 70.00th=[ 734], 80.00th=[ 775], 90.00th=[ 840], 95.00th=[ 881], 00:36:10.275 | 99.00th=[ 988], 99.50th=[ 1004], 99.90th=[ 1188], 99.95th=[ 1188], 00:36:10.275 | 99.99th=[ 1188] 00:36:10.275 bw ( KiB/s): min= 4096, max= 4096, per=40.41%, avg=4096.00, stdev= 0.00, samples=1 00:36:10.275 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:10.275 lat (usec) : 250=0.17%, 500=7.15%, 750=35.58%, 1000=38.35% 00:36:10.275 lat (msec) : 2=18.76% 00:36:10.275 cpu : usr=1.50%, sys=4.10%, ctx=1192, majf=0, minf=1 00:36:10.275 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:10.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.275 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.275 issued rwts: total=512,677,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.275 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:10.275 job3: (groupid=0, jobs=1): err= 0: pid=4155843: Wed Nov 27 10:08:25 2024 00:36:10.275 read: IOPS=18, BW=74.3KiB/s (76.1kB/s)(76.0KiB/1023msec) 00:36:10.275 slat (nsec): min=25942, max=33729, avg=26796.89, stdev=1829.25 00:36:10.275 clat (usec): min=886, max=42073, avg=37456.95, stdev=12828.48 00:36:10.275 lat (usec): min=916, max=42100, avg=37483.74, stdev=12828.15 00:36:10.275 clat percentiles (usec): 00:36:10.275 | 1.00th=[ 889], 5.00th=[ 889], 10.00th=[ 1254], 20.00th=[41157], 00:36:10.275 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:36:10.275 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:10.275 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:10.275 | 99.99th=[42206] 00:36:10.275 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:36:10.275 slat (nsec): min=9754, max=52984, avg=29241.51, stdev=9823.00 00:36:10.275 clat (usec): min=236, max=2147, avg=563.05, stdev=165.90 00:36:10.275 lat (usec): min=247, max=2181, avg=592.29, stdev=169.54 00:36:10.275 clat percentiles (usec): 00:36:10.275 | 1.00th=[ 258], 5.00th=[ 330], 10.00th=[ 363], 20.00th=[ 424], 00:36:10.275 | 30.00th=[ 478], 40.00th=[ 506], 50.00th=[ 545], 60.00th=[ 594], 00:36:10.275 | 
70.00th=[ 652], 80.00th=[ 701], 90.00th=[ 775], 95.00th=[ 807], 00:36:10.275 | 99.00th=[ 922], 99.50th=[ 971], 99.90th=[ 2147], 99.95th=[ 2147], 00:36:10.275 | 99.99th=[ 2147] 00:36:10.275 bw ( KiB/s): min= 4096, max= 4096, per=40.41%, avg=4096.00, stdev= 0.00, samples=1 00:36:10.275 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:10.275 lat (usec) : 250=0.56%, 500=34.46%, 750=49.72%, 1000=11.68% 00:36:10.275 lat (msec) : 2=0.19%, 4=0.19%, 50=3.20% 00:36:10.275 cpu : usr=0.59%, sys=1.66%, ctx=532, majf=0, minf=1 00:36:10.275 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:10.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.275 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.275 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.275 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:10.275 00:36:10.275 Run status group 0 (all jobs): 00:36:10.275 READ: bw=6080KiB/s (6226kB/s), 74.3KiB/s-2046KiB/s (76.1kB/s-2095kB/s), io=6220KiB (6369kB), run=1001-1023msec 00:36:10.275 WRITE: bw=9.90MiB/s (10.4MB/s), 2002KiB/s-2857KiB/s (2050kB/s-2926kB/s), io=10.1MiB (10.6MB), run=1001-1023msec 00:36:10.275 00:36:10.275 Disk stats (read/write): 00:36:10.275 nvme0n1: ios=519/512, merge=0/0, ticks=489/264, in_queue=753, util=86.97% 00:36:10.275 nvme0n2: ios=523/512, merge=0/0, ticks=614/264, in_queue=878, util=93.88% 00:36:10.275 nvme0n3: ios=491/512, merge=0/0, ticks=1084/326, in_queue=1410, util=100.00% 00:36:10.275 nvme0n4: ios=37/512, merge=0/0, ticks=1455/279, in_queue=1734, util=96.47% 00:36:10.275 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:36:10.275 [global] 00:36:10.275 thread=1 00:36:10.275 invalidate=1 00:36:10.275 rw=write 00:36:10.275 time_based=1 00:36:10.275 runtime=1 00:36:10.275 ioengine=libaio 00:36:10.275 direct=1 00:36:10.275 bs=4096 00:36:10.275 iodepth=128 00:36:10.275 norandommap=0 00:36:10.275 numjobs=1 00:36:10.275 00:36:10.275 verify_dump=1 00:36:10.275 verify_backlog=512 00:36:10.275 verify_state_save=0 00:36:10.275 do_verify=1 00:36:10.275 verify=crc32c-intel 00:36:10.275 [job0] 00:36:10.275 filename=/dev/nvme0n1 00:36:10.275 [job1] 00:36:10.275 filename=/dev/nvme0n2 00:36:10.275 [job2] 00:36:10.275 filename=/dev/nvme0n3 00:36:10.275 [job3] 00:36:10.275 filename=/dev/nvme0n4 00:36:10.275 Could not set queue depth (nvme0n1) 00:36:10.275 Could not set queue depth (nvme0n2) 00:36:10.275 Could not set queue depth (nvme0n3) 00:36:10.275 Could not set queue depth (nvme0n4) 00:36:10.536 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:10.536 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:10.536 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:10.536 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:10.536 fio-3.35 00:36:10.536 Starting 4 threads 00:36:11.925 00:36:11.925 job0: (groupid=0, jobs=1): err= 0: pid=4156263: Wed Nov 27 10:08:27 2024 00:36:11.925 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:36:11.925 slat (nsec): min=876, max=14080k, avg=104744.88, stdev=755052.99 00:36:11.925 clat (usec): min=1627, 
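This pass and the final randwrite pass below complete a 2x2 sweep: the wrapper is invoked four times over the same connected devices, varying only the I/O pattern and the queue depth. Equivalently (wrapper path and flags verbatim from fio.sh@50-@53; the loop form is ours):

FIO_WRAPPER=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper
for depth in 1 128; do                     # -d: iodepth
    for pattern in write randwrite; do     # -t: I/O pattern
        "$FIO_WRAPPER" -p nvmf -i 4096 -d "$depth" -t "$pattern" -r 1 -v
    done
done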
max=78723, avg=13141.66, stdev=8277.45 00:36:11.925 lat (usec): min=1647, max=78747, avg=13246.40, stdev=8360.89 00:36:11.925 clat percentiles (usec): 00:36:11.925 | 1.00th=[ 5342], 5.00th=[ 6063], 10.00th=[ 6456], 20.00th=[ 7504], 00:36:11.925 | 30.00th=[ 8848], 40.00th=[10159], 50.00th=[11076], 60.00th=[12518], 00:36:11.925 | 70.00th=[14746], 80.00th=[16450], 90.00th=[21890], 95.00th=[23462], 00:36:11.925 | 99.00th=[56361], 99.50th=[68682], 99.90th=[79168], 99.95th=[79168], 00:36:11.925 | 99.99th=[79168] 00:36:11.925 write: IOPS=4286, BW=16.7MiB/s (17.6MB/s)(16.9MiB/1009msec); 0 zone resets 00:36:11.925 slat (nsec): min=1654, max=9173.7k, avg=105116.12, stdev=694328.39 00:36:11.925 clat (usec): min=1243, max=94323, avg=17159.70, stdev=21898.42 00:36:11.925 lat (usec): min=1253, max=94334, avg=17264.82, stdev=22044.50 00:36:11.925 clat percentiles (usec): 00:36:11.925 | 1.00th=[ 4293], 5.00th=[ 4948], 10.00th=[ 5473], 20.00th=[ 6390], 00:36:11.925 | 30.00th=[ 6915], 40.00th=[ 8455], 50.00th=[ 9241], 60.00th=[10421], 00:36:11.925 | 70.00th=[13829], 80.00th=[15139], 90.00th=[56886], 95.00th=[81265], 00:36:11.925 | 99.00th=[89654], 99.50th=[91751], 99.90th=[93848], 99.95th=[93848], 00:36:11.925 | 99.99th=[93848] 00:36:11.925 bw ( KiB/s): min=14480, max=19104, per=19.57%, avg=16792.00, stdev=3269.66, samples=2 00:36:11.925 iops : min= 3620, max= 4776, avg=4198.00, stdev=817.42, samples=2 00:36:11.925 lat (msec) : 2=0.13%, 4=0.30%, 10=48.25%, 20=38.01%, 50=7.43% 00:36:11.925 lat (msec) : 100=5.88% 00:36:11.925 cpu : usr=2.98%, sys=4.46%, ctx=321, majf=0, minf=1 00:36:11.925 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:36:11.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.925 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:11.925 issued rwts: total=4096,4325,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.925 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:11.925 job1: (groupid=0, jobs=1): err= 0: pid=4156283: Wed Nov 27 10:08:27 2024 00:36:11.925 read: IOPS=3543, BW=13.8MiB/s (14.5MB/s)(13.9MiB/1007msec) 00:36:11.925 slat (nsec): min=893, max=16130k, avg=114651.97, stdev=867446.47 00:36:11.925 clat (usec): min=1345, max=70180, avg=15726.39, stdev=8550.24 00:36:11.925 lat (usec): min=1830, max=70181, avg=15841.04, stdev=8621.47 00:36:11.925 clat percentiles (usec): 00:36:11.925 | 1.00th=[ 4015], 5.00th=[ 5342], 10.00th=[ 6849], 20.00th=[ 7635], 00:36:11.925 | 30.00th=[ 8979], 40.00th=[12387], 50.00th=[16450], 60.00th=[17695], 00:36:11.925 | 70.00th=[19268], 80.00th=[21103], 90.00th=[25560], 95.00th=[31589], 00:36:11.925 | 99.00th=[40109], 99.50th=[49546], 99.90th=[69731], 99.95th=[69731], 00:36:11.925 | 99.99th=[69731] 00:36:11.925 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:36:11.925 slat (nsec): min=1561, max=28125k, avg=159893.74, stdev=1100968.44 00:36:11.925 clat (msec): min=3, max=113, avg=19.92, stdev=19.26 00:36:11.925 lat (msec): min=3, max=113, avg=20.08, stdev=19.38 00:36:11.925 clat percentiles (msec): 00:36:11.925 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 7], 20.00th=[ 8], 00:36:11.925 | 30.00th=[ 9], 40.00th=[ 11], 50.00th=[ 17], 60.00th=[ 20], 00:36:11.925 | 70.00th=[ 22], 80.00th=[ 26], 90.00th=[ 32], 95.00th=[ 69], 00:36:11.925 | 99.00th=[ 105], 99.50th=[ 106], 99.90th=[ 114], 99.95th=[ 114], 00:36:11.925 | 99.99th=[ 114] 00:36:11.925 bw ( KiB/s): min=12288, max=16384, per=16.71%, avg=14336.00, stdev=2896.31, samples=2 00:36:11.925 iops 
: min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:36:11.925 lat (msec) : 2=0.22%, 4=1.97%, 10=34.28%, 20=33.49%, 50=26.05% 00:36:11.925 lat (msec) : 100=3.20%, 250=0.78% 00:36:11.925 cpu : usr=1.79%, sys=2.19%, ctx=312, majf=0, minf=1 00:36:11.925 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:36:11.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.925 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:11.925 issued rwts: total=3568,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.925 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:11.925 job2: (groupid=0, jobs=1): err= 0: pid=4156310: Wed Nov 27 10:08:27 2024 00:36:11.926 read: IOPS=8761, BW=34.2MiB/s (35.9MB/s)(34.7MiB/1014msec) 00:36:11.926 slat (nsec): min=960, max=9357.2k, avg=56969.07, stdev=444425.09 00:36:11.926 clat (usec): min=2928, max=25549, avg=7720.33, stdev=2608.29 00:36:11.926 lat (usec): min=2932, max=25551, avg=7777.30, stdev=2626.95 00:36:11.926 clat percentiles (usec): 00:36:11.926 | 1.00th=[ 3621], 5.00th=[ 4883], 10.00th=[ 5342], 20.00th=[ 5800], 00:36:11.926 | 30.00th=[ 6456], 40.00th=[ 6783], 50.00th=[ 7439], 60.00th=[ 7701], 00:36:11.926 | 70.00th=[ 8094], 80.00th=[ 8979], 90.00th=[10552], 95.00th=[12518], 00:36:11.926 | 99.00th=[16188], 99.50th=[24249], 99.90th=[25560], 99.95th=[25560], 00:36:11.926 | 99.99th=[25560] 00:36:11.926 write: IOPS=9088, BW=35.5MiB/s (37.2MB/s)(36.0MiB/1014msec); 0 zone resets 00:36:11.926 slat (nsec): min=1644, max=7933.0k, avg=49521.84, stdev=348518.99 00:36:11.926 clat (usec): min=1171, max=16989, avg=6517.10, stdev=1941.00 00:36:11.926 lat (usec): min=1183, max=16999, avg=6566.62, stdev=1947.43 00:36:11.926 clat percentiles (usec): 00:36:11.926 | 1.00th=[ 2868], 5.00th=[ 3818], 10.00th=[ 4113], 20.00th=[ 4752], 00:36:11.926 | 30.00th=[ 5735], 40.00th=[ 6259], 50.00th=[ 6456], 60.00th=[ 6587], 00:36:11.926 | 70.00th=[ 6783], 80.00th=[ 7963], 90.00th=[ 8979], 95.00th=[ 9896], 00:36:11.926 | 99.00th=[12125], 99.50th=[14353], 99.90th=[15926], 99.95th=[15926], 00:36:11.926 | 99.99th=[16909] 00:36:11.926 bw ( KiB/s): min=36864, max=36864, per=42.96%, avg=36864.00, stdev= 0.00, samples=2 00:36:11.926 iops : min= 9216, max= 9216, avg=9216.00, stdev= 0.00, samples=2 00:36:11.926 lat (msec) : 2=0.15%, 4=5.25%, 10=85.72%, 20=8.60%, 50=0.28% 00:36:11.926 cpu : usr=6.52%, sys=8.19%, ctx=588, majf=0, minf=1 00:36:11.926 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:36:11.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.926 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:11.926 issued rwts: total=8884,9216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.926 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:11.926 job3: (groupid=0, jobs=1): err= 0: pid=4156321: Wed Nov 27 10:08:27 2024 00:36:11.926 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:36:11.926 slat (nsec): min=999, max=15118k, avg=105805.76, stdev=764112.29 00:36:11.926 clat (usec): min=4310, max=72602, avg=12343.96, stdev=6089.10 00:36:11.926 lat (usec): min=4318, max=72611, avg=12449.77, stdev=6183.82 00:36:11.926 clat percentiles (usec): 00:36:11.926 | 1.00th=[ 5342], 5.00th=[ 7832], 10.00th=[ 8455], 20.00th=[ 9372], 00:36:11.926 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10552], 60.00th=[11469], 00:36:11.926 | 70.00th=[12780], 80.00th=[14353], 90.00th=[16450], 95.00th=[22152], 00:36:11.926 
| 99.00th=[37487], 99.50th=[52167], 99.90th=[72877], 99.95th=[72877], 00:36:11.926 | 99.99th=[72877] 00:36:11.926 write: IOPS=4597, BW=18.0MiB/s (18.8MB/s)(18.1MiB/1007msec); 0 zone resets 00:36:11.926 slat (nsec): min=1684, max=9872.3k, avg=105771.97, stdev=630376.76 00:36:11.926 clat (usec): min=1177, max=77346, avg=15294.52, stdev=14246.52 00:36:11.926 lat (usec): min=1189, max=77356, avg=15400.29, stdev=14334.73 00:36:11.926 clat percentiles (usec): 00:36:11.926 | 1.00th=[ 4948], 5.00th=[ 5604], 10.00th=[ 6194], 20.00th=[ 7308], 00:36:11.926 | 30.00th=[ 7898], 40.00th=[ 9372], 50.00th=[10159], 60.00th=[11731], 00:36:11.926 | 70.00th=[14746], 80.00th=[17957], 90.00th=[30016], 95.00th=[57410], 00:36:11.926 | 99.00th=[73925], 99.50th=[74974], 99.90th=[77071], 99.95th=[77071], 00:36:11.926 | 99.99th=[77071] 00:36:11.926 bw ( KiB/s): min=17016, max=19848, per=21.48%, avg=18432.00, stdev=2002.53, samples=2 00:36:11.926 iops : min= 4254, max= 4962, avg=4608.00, stdev=500.63, samples=2 00:36:11.926 lat (msec) : 2=0.03%, 4=0.13%, 10=42.81%, 20=45.44%, 50=8.49% 00:36:11.926 lat (msec) : 100=3.10% 00:36:11.926 cpu : usr=3.58%, sys=5.07%, ctx=329, majf=0, minf=2 00:36:11.926 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:36:11.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.926 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:11.926 issued rwts: total=4608,4630,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.926 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:11.926 00:36:11.926 Run status group 0 (all jobs): 00:36:11.926 READ: bw=81.5MiB/s (85.5MB/s), 13.8MiB/s-34.2MiB/s (14.5MB/s-35.9MB/s), io=82.6MiB (86.7MB), run=1007-1014msec 00:36:11.926 WRITE: bw=83.8MiB/s (87.9MB/s), 13.9MiB/s-35.5MiB/s (14.6MB/s-37.2MB/s), io=85.0MiB (89.1MB), run=1007-1014msec 00:36:11.926 00:36:11.926 Disk stats (read/write): 00:36:11.926 nvme0n1: ios=3098/3079, merge=0/0, ticks=33349/53815, in_queue=87164, util=82.26% 00:36:11.926 nvme0n2: ios=3119/3102, merge=0/0, ticks=23757/26302, in_queue=50059, util=90.70% 00:36:11.926 nvme0n3: ios=7150/7168, merge=0/0, ticks=51641/43962, in_queue=95603, util=95.39% 00:36:11.926 nvme0n4: ios=3615/3671, merge=0/0, ticks=42472/53532, in_queue=96004, util=98.88% 00:36:11.926 10:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:36:11.926 [global] 00:36:11.926 thread=1 00:36:11.926 invalidate=1 00:36:11.926 rw=randwrite 00:36:11.926 time_based=1 00:36:11.926 runtime=1 00:36:11.926 ioengine=libaio 00:36:11.926 direct=1 00:36:11.926 bs=4096 00:36:11.926 iodepth=128 00:36:11.926 norandommap=0 00:36:11.926 numjobs=1 00:36:11.926 00:36:11.926 verify_dump=1 00:36:11.926 verify_backlog=512 00:36:11.926 verify_state_save=0 00:36:11.926 do_verify=1 00:36:11.926 verify=crc32c-intel 00:36:11.926 [job0] 00:36:11.926 filename=/dev/nvme0n1 00:36:11.926 [job1] 00:36:11.926 filename=/dev/nvme0n2 00:36:11.926 [job2] 00:36:11.926 filename=/dev/nvme0n3 00:36:11.926 [job3] 00:36:11.926 filename=/dev/nvme0n4 00:36:11.926 Could not set queue depth (nvme0n1) 00:36:11.926 Could not set queue depth (nvme0n2) 00:36:11.926 Could not set queue depth (nvme0n3) 00:36:11.926 Could not set queue depth (nvme0n4) 00:36:12.193 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:12.193 job1: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:12.193 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:12.193 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:12.193 fio-3.35 00:36:12.193 Starting 4 threads 00:36:13.576 00:36:13.576 job0: (groupid=0, jobs=1): err= 0: pid=4156737: Wed Nov 27 10:08:28 2024 00:36:13.576 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:36:13.576 slat (nsec): min=927, max=20018k, avg=100662.62, stdev=821348.89 00:36:13.576 clat (usec): min=3499, max=45578, avg=12698.49, stdev=7021.20 00:36:13.576 lat (usec): min=3508, max=45603, avg=12799.15, stdev=7087.77 00:36:13.576 clat percentiles (usec): 00:36:13.576 | 1.00th=[ 3982], 5.00th=[ 6456], 10.00th=[ 7832], 20.00th=[ 8160], 00:36:13.576 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9634], 60.00th=[10814], 00:36:13.576 | 70.00th=[13304], 80.00th=[17695], 90.00th=[21627], 95.00th=[27657], 00:36:13.576 | 99.00th=[36963], 99.50th=[37487], 99.90th=[42730], 99.95th=[42730], 00:36:13.576 | 99.99th=[45351] 00:36:13.576 write: IOPS=4578, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1007msec); 0 zone resets 00:36:13.576 slat (nsec): min=1500, max=13547k, avg=98065.54, stdev=623236.67 00:36:13.576 clat (usec): min=1037, max=38726, avg=15023.95, stdev=8480.54 00:36:13.576 lat (usec): min=1134, max=38730, avg=15122.02, stdev=8543.29 00:36:13.576 clat percentiles (usec): 00:36:13.576 | 1.00th=[ 1942], 5.00th=[ 4146], 10.00th=[ 5145], 20.00th=[ 7046], 00:36:13.576 | 30.00th=[ 9110], 40.00th=[10552], 50.00th=[13173], 60.00th=[15533], 00:36:13.576 | 70.00th=[20317], 80.00th=[23725], 90.00th=[27395], 95.00th=[29492], 00:36:13.576 | 99.00th=[35390], 99.50th=[38011], 99.90th=[38536], 99.95th=[38536], 00:36:13.576 | 99.99th=[38536] 00:36:13.576 bw ( KiB/s): min=15056, max=21808, per=18.24%, avg=18432.00, stdev=4774.38, samples=2 00:36:13.576 iops : min= 3764, max= 5452, avg=4608.00, stdev=1193.60, samples=2 00:36:13.576 lat (msec) : 2=0.59%, 4=2.34%, 10=41.87%, 20=33.54%, 50=21.66% 00:36:13.576 cpu : usr=2.39%, sys=5.77%, ctx=427, majf=0, minf=1 00:36:13.576 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:36:13.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:13.576 issued rwts: total=4608,4611,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.576 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:13.576 job1: (groupid=0, jobs=1): err= 0: pid=4156746: Wed Nov 27 10:08:28 2024 00:36:13.576 read: IOPS=8143, BW=31.8MiB/s (33.4MB/s)(32.0MiB/1006msec) 00:36:13.576 slat (nsec): min=975, max=8423.8k, avg=59612.47, stdev=472164.07 00:36:13.576 clat (usec): min=2580, max=18804, avg=8239.88, stdev=2561.29 00:36:13.576 lat (usec): min=2585, max=18807, avg=8299.49, stdev=2585.21 00:36:13.576 clat percentiles (usec): 00:36:13.576 | 1.00th=[ 4424], 5.00th=[ 5407], 10.00th=[ 5866], 20.00th=[ 6063], 00:36:13.576 | 30.00th=[ 6456], 40.00th=[ 6849], 50.00th=[ 7373], 60.00th=[ 8455], 00:36:13.576 | 70.00th=[ 9241], 80.00th=[10290], 90.00th=[11994], 95.00th=[13042], 00:36:13.576 | 99.00th=[15795], 99.50th=[16319], 99.90th=[18220], 99.95th=[18220], 00:36:13.576 | 99.99th=[18744] 00:36:13.576 write: IOPS=8442, BW=33.0MiB/s (34.6MB/s)(33.2MiB/1006msec); 0 zone resets 00:36:13.576 slat (nsec): min=1560, max=9618.9k, 
avg=55717.04, stdev=407386.01 00:36:13.576 clat (usec): min=1149, max=20391, avg=7085.54, stdev=2313.96 00:36:13.576 lat (usec): min=1159, max=20408, avg=7141.26, stdev=2330.50 00:36:13.576 clat percentiles (usec): 00:36:13.576 | 1.00th=[ 2704], 5.00th=[ 4080], 10.00th=[ 4490], 20.00th=[ 5407], 00:36:13.576 | 30.00th=[ 6128], 40.00th=[ 6652], 50.00th=[ 6915], 60.00th=[ 7111], 00:36:13.576 | 70.00th=[ 7308], 80.00th=[ 8455], 90.00th=[ 9896], 95.00th=[11600], 00:36:13.576 | 99.00th=[14746], 99.50th=[17957], 99.90th=[20317], 99.95th=[20317], 00:36:13.576 | 99.99th=[20317] 00:36:13.576 bw ( KiB/s): min=30056, max=36864, per=33.11%, avg=33460.00, stdev=4813.98, samples=2 00:36:13.576 iops : min= 7514, max= 9216, avg=8365.00, stdev=1203.50, samples=2 00:36:13.577 lat (msec) : 2=0.22%, 4=1.93%, 10=82.60%, 20=15.01%, 50=0.24% 00:36:13.577 cpu : usr=6.17%, sys=6.97%, ctx=623, majf=0, minf=1 00:36:13.577 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:36:13.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:13.577 issued rwts: total=8192,8493,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.577 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:13.577 job2: (groupid=0, jobs=1): err= 0: pid=4156753: Wed Nov 27 10:08:28 2024 00:36:13.577 read: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec) 00:36:13.577 slat (nsec): min=957, max=10721k, avg=74238.79, stdev=468750.00 00:36:13.577 clat (usec): min=5259, max=28819, avg=9334.39, stdev=1927.70 00:36:13.577 lat (usec): min=5264, max=28828, avg=9408.63, stdev=1976.85 00:36:13.577 clat percentiles (usec): 00:36:13.577 | 1.00th=[ 5800], 5.00th=[ 7373], 10.00th=[ 7832], 20.00th=[ 8291], 00:36:13.577 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 9110], 00:36:13.577 | 70.00th=[ 9503], 80.00th=[10290], 90.00th=[11338], 95.00th=[12518], 00:36:13.577 | 99.00th=[15795], 99.50th=[16909], 99.90th=[24773], 99.95th=[28705], 00:36:13.577 | 99.99th=[28705] 00:36:13.577 write: IOPS=6620, BW=25.9MiB/s (27.1MB/s)(25.9MiB/1002msec); 0 zone resets 00:36:13.577 slat (nsec): min=1595, max=14583k, avg=77065.11, stdev=469486.75 00:36:13.577 clat (usec): min=674, max=41077, avg=10453.52, stdev=5777.87 00:36:13.577 lat (usec): min=4461, max=41084, avg=10530.59, stdev=5817.13 00:36:13.577 clat percentiles (usec): 00:36:13.577 | 1.00th=[ 5407], 5.00th=[ 7242], 10.00th=[ 7767], 20.00th=[ 8094], 00:36:13.577 | 30.00th=[ 8160], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[ 8848], 00:36:13.577 | 70.00th=[ 9241], 80.00th=[10683], 90.00th=[15139], 95.00th=[24773], 00:36:13.577 | 99.00th=[38011], 99.50th=[38536], 99.90th=[41157], 99.95th=[41157], 00:36:13.577 | 99.99th=[41157] 00:36:13.577 bw ( KiB/s): min=24496, max=27552, per=25.75%, avg=26024.00, stdev=2160.92, samples=2 00:36:13.577 iops : min= 6124, max= 6888, avg=6506.00, stdev=540.23, samples=2 00:36:13.577 lat (usec) : 750=0.01% 00:36:13.577 lat (msec) : 10=77.90%, 20=18.67%, 50=3.42% 00:36:13.577 cpu : usr=4.00%, sys=7.09%, ctx=549, majf=0, minf=1 00:36:13.577 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:36:13.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:13.577 issued rwts: total=6144,6634,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.577 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:13.577 
job3: (groupid=0, jobs=1): err= 0: pid=4156759: Wed Nov 27 10:08:28 2024 00:36:13.577 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:36:13.577 slat (nsec): min=960, max=13142k, avg=86882.82, stdev=638487.09 00:36:13.577 clat (usec): min=4056, max=41781, avg=11168.92, stdev=5176.10 00:36:13.577 lat (usec): min=4062, max=45063, avg=11255.81, stdev=5226.18 00:36:13.577 clat percentiles (usec): 00:36:13.577 | 1.00th=[ 5800], 5.00th=[ 7177], 10.00th=[ 7898], 20.00th=[ 8356], 00:36:13.577 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9896], 00:36:13.577 | 70.00th=[11207], 80.00th=[12911], 90.00th=[19268], 95.00th=[20579], 00:36:13.577 | 99.00th=[32637], 99.50th=[39584], 99.90th=[41681], 99.95th=[41681], 00:36:13.577 | 99.99th=[41681] 00:36:13.577 write: IOPS=5694, BW=22.2MiB/s (23.3MB/s)(22.3MiB/1002msec); 0 zone resets 00:36:13.577 slat (nsec): min=1594, max=10714k, avg=81700.46, stdev=543743.74 00:36:13.577 clat (usec): min=851, max=49684, avg=11171.49, stdev=7317.09 00:36:13.577 lat (usec): min=861, max=49689, avg=11253.19, stdev=7369.55 00:36:13.577 clat percentiles (usec): 00:36:13.577 | 1.00th=[ 4817], 5.00th=[ 6128], 10.00th=[ 7504], 20.00th=[ 7898], 00:36:13.577 | 30.00th=[ 8094], 40.00th=[ 8291], 50.00th=[ 8586], 60.00th=[ 9110], 00:36:13.577 | 70.00th=[ 9896], 80.00th=[11600], 90.00th=[19530], 95.00th=[30278], 00:36:13.577 | 99.00th=[43254], 99.50th=[46924], 99.90th=[49021], 99.95th=[49021], 00:36:13.577 | 99.99th=[49546] 00:36:13.577 bw ( KiB/s): min=21048, max=24008, per=22.29%, avg=22528.00, stdev=2093.04, samples=2 00:36:13.577 iops : min= 5262, max= 6002, avg=5632.00, stdev=523.26, samples=2 00:36:13.577 lat (usec) : 1000=0.04% 00:36:13.577 lat (msec) : 2=0.08%, 4=0.09%, 10=66.21%, 20=25.91%, 50=7.67% 00:36:13.577 cpu : usr=4.00%, sys=6.29%, ctx=397, majf=0, minf=1 00:36:13.577 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:36:13.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:13.577 issued rwts: total=5632,5706,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.577 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:13.577 00:36:13.577 Run status group 0 (all jobs): 00:36:13.577 READ: bw=95.3MiB/s (100.0MB/s), 17.9MiB/s-31.8MiB/s (18.7MB/s-33.4MB/s), io=96.0MiB (101MB), run=1002-1007msec 00:36:13.577 WRITE: bw=98.7MiB/s (103MB/s), 17.9MiB/s-33.0MiB/s (18.8MB/s-34.6MB/s), io=99.4MiB (104MB), run=1002-1007msec 00:36:13.577 00:36:13.577 Disk stats (read/write): 00:36:13.577 nvme0n1: ios=4146/4203, merge=0/0, ticks=43690/57778, in_queue=101468, util=87.07% 00:36:13.577 nvme0n2: ios=7167/7175, merge=0/0, ticks=53709/47524, in_queue=101233, util=87.36% 00:36:13.577 nvme0n3: ios=5145/5255, merge=0/0, ticks=24709/26237, in_queue=50946, util=96.20% 00:36:13.577 nvme0n4: ios=4253/4608, merge=0/0, ticks=25452/27763, in_queue=53215, util=100.00% 00:36:13.577 10:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:36:13.577 10:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=4157060 00:36:13.577 10:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:36:13.577 10:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:36:13.577 [global] 
00:36:13.577 thread=1 00:36:13.577 invalidate=1 00:36:13.577 rw=read 00:36:13.577 time_based=1 00:36:13.577 runtime=10 00:36:13.577 ioengine=libaio 00:36:13.577 direct=1 00:36:13.577 bs=4096 00:36:13.577 iodepth=1 00:36:13.577 norandommap=1 00:36:13.577 numjobs=1 00:36:13.577 00:36:13.577 [job0] 00:36:13.577 filename=/dev/nvme0n1 00:36:13.577 [job1] 00:36:13.577 filename=/dev/nvme0n2 00:36:13.577 [job2] 00:36:13.577 filename=/dev/nvme0n3 00:36:13.577 [job3] 00:36:13.577 filename=/dev/nvme0n4 00:36:13.577 Could not set queue depth (nvme0n1) 00:36:13.577 Could not set queue depth (nvme0n2) 00:36:13.577 Could not set queue depth (nvme0n3) 00:36:13.577 Could not set queue depth (nvme0n4) 00:36:13.837 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:13.837 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:13.837 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:13.837 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:13.837 fio-3.35 00:36:13.837 Starting 4 threads 00:36:16.492 10:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:36:16.810 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=274432, buflen=4096 00:36:16.810 fio: pid=4157256, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:16.810 10:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:36:16.810 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=2478080, buflen=4096 00:36:16.810 fio: pid=4157254, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:16.810 10:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:16.810 10:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:36:17.076 10:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:17.076 10:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:36:17.076 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=4259840, buflen=4096 00:36:17.076 fio: pid=4157250, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:17.338 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=12296192, buflen=4096 00:36:17.338 fio: pid=4157251, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:17.338 10:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:17.338 10:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc2 00:36:17.338 00:36:17.338 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4157250: Wed Nov 27 10:08:32 2024 00:36:17.338 read: IOPS=349, BW=1398KiB/s (1431kB/s)(4160KiB/2976msec) 00:36:17.338 slat (usec): min=6, max=256, avg=25.41, stdev=11.05 00:36:17.338 clat (usec): min=300, max=42176, avg=2808.70, stdev=8685.53 00:36:17.338 lat (usec): min=326, max=42195, avg=2834.10, stdev=8687.52 00:36:17.338 clat percentiles (usec): 00:36:17.338 | 1.00th=[ 465], 5.00th=[ 578], 10.00th=[ 635], 20.00th=[ 725], 00:36:17.338 | 30.00th=[ 799], 40.00th=[ 848], 50.00th=[ 898], 60.00th=[ 947], 00:36:17.338 | 70.00th=[ 996], 80.00th=[ 1074], 90.00th=[ 1139], 95.00th=[ 1401], 00:36:17.338 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:17.338 | 99.99th=[42206] 00:36:17.338 bw ( KiB/s): min= 96, max= 4824, per=27.45%, avg=1646.40, stdev=2066.99, samples=5 00:36:17.338 iops : min= 24, max= 1206, avg=411.60, stdev=516.75, samples=5 00:36:17.338 lat (usec) : 500=1.92%, 750=20.46%, 1000=48.03% 00:36:17.338 lat (msec) : 2=24.69%, 4=0.10%, 50=4.71% 00:36:17.338 cpu : usr=0.37%, sys=1.04%, ctx=1044, majf=0, minf=1 00:36:17.338 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:17.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.338 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.338 issued rwts: total=1041,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.338 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:17.338 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4157251: Wed Nov 27 10:08:32 2024 00:36:17.338 read: IOPS=954, BW=3818KiB/s (3910kB/s)(11.7MiB/3145msec) 00:36:17.338 slat (usec): min=6, max=26984, avg=61.12, stdev=803.20 00:36:17.338 clat (usec): min=284, max=7758, avg=970.86, stdev=183.95 00:36:17.338 lat (usec): min=311, max=28022, avg=1032.00, stdev=824.08 00:36:17.338 clat percentiles (usec): 00:36:17.338 | 1.00th=[ 635], 5.00th=[ 775], 10.00th=[ 848], 20.00th=[ 914], 00:36:17.338 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 979], 60.00th=[ 996], 00:36:17.338 | 70.00th=[ 1012], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1090], 00:36:17.338 | 99.00th=[ 1172], 99.50th=[ 1205], 99.90th=[ 2089], 99.95th=[ 5669], 00:36:17.338 | 99.99th=[ 7767] 00:36:17.338 bw ( KiB/s): min= 3355, max= 4072, per=64.43%, avg=3863.17, stdev=255.38, samples=6 00:36:17.338 iops : min= 838, max= 1018, avg=965.67, stdev=64.14, samples=6 00:36:17.338 lat (usec) : 500=0.27%, 750=3.30%, 1000=60.11% 00:36:17.338 lat (msec) : 2=36.16%, 4=0.07%, 10=0.07% 00:36:17.338 cpu : usr=1.56%, sys=3.94%, ctx=3010, majf=0, minf=2 00:36:17.338 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:17.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.338 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.338 issued rwts: total=3003,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.338 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:17.338 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4157254: Wed Nov 27 10:08:32 2024 00:36:17.338 read: IOPS=219, BW=875KiB/s (896kB/s)(2420KiB/2767msec) 00:36:17.338 slat (nsec): min=24665, max=61334, avg=26166.30, stdev=2252.14 00:36:17.338 clat (usec): min=571, max=41364, 
avg=4504.16, stdev=11310.58 00:36:17.338 lat (usec): min=596, max=41390, avg=4530.33, stdev=11310.56 00:36:17.338 clat percentiles (usec): 00:36:17.338 | 1.00th=[ 717], 5.00th=[ 832], 10.00th=[ 881], 20.00th=[ 930], 00:36:17.338 | 30.00th=[ 963], 40.00th=[ 996], 50.00th=[ 1020], 60.00th=[ 1045], 00:36:17.338 | 70.00th=[ 1074], 80.00th=[ 1123], 90.00th=[ 1188], 95.00th=[41157], 00:36:17.338 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:36:17.338 | 99.99th=[41157] 00:36:17.338 bw ( KiB/s): min= 96, max= 3848, per=15.98%, avg=958.40, stdev=1632.76, samples=5 00:36:17.338 iops : min= 24, max= 962, avg=239.60, stdev=408.19, samples=5 00:36:17.338 lat (usec) : 750=1.16%, 1000=41.91% 00:36:17.338 lat (msec) : 2=48.02%, 50=8.75% 00:36:17.338 cpu : usr=0.22%, sys=0.69%, ctx=607, majf=0, minf=2 00:36:17.338 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:17.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.338 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.338 issued rwts: total=606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.338 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:17.338 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4157256: Wed Nov 27 10:08:32 2024 00:36:17.338 read: IOPS=26, BW=104KiB/s (106kB/s)(268KiB/2585msec) 00:36:17.338 slat (nsec): min=22520, max=38738, avg=26051.54, stdev=1722.36 00:36:17.338 clat (usec): min=730, max=42130, avg=38218.72, stdev=10970.74 00:36:17.338 lat (usec): min=752, max=42155, avg=38244.76, stdev=10970.17 00:36:17.338 clat percentiles (usec): 00:36:17.338 | 1.00th=[ 734], 5.00th=[ 1074], 10.00th=[40633], 20.00th=[41157], 00:36:17.338 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:36:17.338 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:17.338 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:17.338 | 99.99th=[42206] 00:36:17.338 bw ( KiB/s): min= 96, max= 120, per=1.73%, avg=104.00, stdev=11.31, samples=5 00:36:17.338 iops : min= 24, max= 30, avg=26.00, stdev= 2.83, samples=5 00:36:17.338 lat (usec) : 750=1.47%, 1000=2.94% 00:36:17.338 lat (msec) : 2=2.94%, 50=91.18% 00:36:17.338 cpu : usr=0.12%, sys=0.00%, ctx=68, majf=0, minf=2 00:36:17.338 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:17.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.338 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.338 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.338 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:17.338 00:36:17.338 Run status group 0 (all jobs): 00:36:17.338 READ: bw=5996KiB/s (6139kB/s), 104KiB/s-3818KiB/s (106kB/s-3910kB/s), io=18.4MiB (19.3MB), run=2585-3145msec 00:36:17.338 00:36:17.338 Disk stats (read/write): 00:36:17.338 nvme0n1: ios=1037/0, merge=0/0, ticks=2778/0, in_queue=2778, util=94.69% 00:36:17.338 nvme0n2: ios=2967/0, merge=0/0, ticks=2707/0, in_queue=2707, util=92.50% 00:36:17.338 nvme0n3: ios=601/0, merge=0/0, ticks=2555/0, in_queue=2555, util=95.99% 00:36:17.338 nvme0n4: ios=61/0, merge=0/0, ticks=2312/0, in_queue=2312, util=96.06% 00:36:17.338 10:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:17.338 
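Note: the wrapper invocation traced above (`fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10`) plus the `[global]`/`[job]` stanzas it dumped pin down this workload completely. A minimal standalone equivalent, assuming the same four /dev/nvme0nX namespaces are still connected — device names and job parameters are taken from the log, the job-file path is illustrative:

```bash
#!/usr/bin/env bash
# Reconstructed from the job dump above: 4 KiB sequential reads, QD 1,
# 10 s time-based, libaio + O_DIRECT, one job per NVMe-oF namespace.
cat > /tmp/nvmf-read.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio /tmp/nvmf-read.fio
```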
10:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:36:17.599 10:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:17.599 10:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:36:17.860 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:17.860 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:36:18.121 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:18.121 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:36:18.121 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:36:18.121 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 4157060 00:36:18.121 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:36:18.121 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:18.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:18.381 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:18.381 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:36:18.381 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:36:18.381 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:18.381 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:36:18.381 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:18.381 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:36:18.381 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:36:18.381 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:36:18.381 nvmf hotplug test: fio failed as expected 00:36:18.381 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:18.381 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 
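Note: the hotplug check that just completed follows a simple race pattern visible in the trace — fio is launched in the background (`fio_pid=4157060`), the backing raid/Malloc bdevs are torn down over RPC while I/O is in flight (hence the `err=95, Operation not supported` results above), and the script then asserts that fio exited nonzero ("fio failed as expected"). A sketch of that control flow, assuming a target already serving the namespaces, rpc.py on its default socket, and the job file from the previous note; the RPC path here is a placeholder for the full workspace path used in the log:

```bash
#!/usr/bin/env bash
RPC=/path/to/spdk/scripts/rpc.py   # placeholder; the log uses the full workspace path

# 1. Start the read workload in the background and remember its pid.
fio /tmp/nvmf-read.fio &> fio.log &
fio_pid=$!

# 2. Pull the block devices out from under it while I/O is in flight
#    (same teardown order as the trace: raid bdevs first, then mallocs).
sleep 3
"$RPC" bdev_raid_delete concat0
"$RPC" bdev_raid_delete raid0
for bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    "$RPC" bdev_malloc_delete "$bdev"
done

# 3. fio must now fail; a clean exit means hotplug went unnoticed.
if wait "$fio_pid"; then
    echo "ERROR: fio survived bdev removal" >&2
    exit 1
fi
echo "nvmf hotplug test: fio failed as expected"
```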
00:36:18.381 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:36:18.381 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:36:18.381 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:36:18.381 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:36:18.381 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:18.381 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:36:18.381 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:18.381 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:36:18.381 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:18.381 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:18.381 rmmod nvme_tcp 00:36:18.642 rmmod nvme_fabrics 00:36:18.642 rmmod nvme_keyring 00:36:18.642 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:18.642 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:36:18.642 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:36:18.642 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 4153884 ']' 00:36:18.642 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 4153884 00:36:18.642 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 4153884 ']' 00:36:18.642 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 4153884 00:36:18.642 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:36:18.642 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:18.642 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4153884 00:36:18.642 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:18.642 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:18.642 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4153884' 00:36:18.642 killing process with pid 4153884 00:36:18.642 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 4153884 00:36:18.642 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 4153884 00:36:18.642 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:18.642 10:08:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:18.642 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:18.642 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:36:18.642 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:36:18.642 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:36:18.642 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:18.642 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:18.642 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:18.642 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:18.642 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:18.642 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:21.184 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:21.184 00:36:21.184 real 0m28.256s 00:36:21.184 user 2m27.861s 00:36:21.184 sys 0m11.915s 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:21.185 ************************************ 00:36:21.185 END TEST nvmf_fio_target 00:36:21.185 ************************************ 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:21.185 ************************************ 00:36:21.185 START TEST nvmf_bdevio 00:36:21.185 ************************************ 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:36:21.185 * Looking for test storage... 
00:36:21.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:21.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:21.185 --rc genhtml_branch_coverage=1 00:36:21.185 --rc genhtml_function_coverage=1 00:36:21.185 --rc genhtml_legend=1 00:36:21.185 --rc geninfo_all_blocks=1 00:36:21.185 --rc geninfo_unexecuted_blocks=1 00:36:21.185 00:36:21.185 ' 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:21.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:21.185 --rc genhtml_branch_coverage=1 00:36:21.185 --rc genhtml_function_coverage=1 00:36:21.185 --rc genhtml_legend=1 00:36:21.185 --rc geninfo_all_blocks=1 00:36:21.185 --rc geninfo_unexecuted_blocks=1 00:36:21.185 00:36:21.185 ' 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:21.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:21.185 --rc genhtml_branch_coverage=1 00:36:21.185 --rc genhtml_function_coverage=1 00:36:21.185 --rc genhtml_legend=1 00:36:21.185 --rc geninfo_all_blocks=1 00:36:21.185 --rc geninfo_unexecuted_blocks=1 00:36:21.185 00:36:21.185 ' 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:21.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:21.185 --rc genhtml_branch_coverage=1 00:36:21.185 --rc genhtml_function_coverage=1 00:36:21.185 --rc genhtml_legend=1 00:36:21.185 --rc geninfo_all_blocks=1 00:36:21.185 --rc geninfo_unexecuted_blocks=1 00:36:21.185 00:36:21.185 ' 00:36:21.185 10:08:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.185 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:36:21.186 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.186 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:36:21.186 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:21.186 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:21.186 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:21.186 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:21.186 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:21.186 10:08:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:21.186 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:21.186 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:21.186 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:21.186 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:21.186 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:21.186 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:21.186 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:36:21.186 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:21.186 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:21.186 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:21.186 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:21.186 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:21.186 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:21.186 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:21.186 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:21.186 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:21.186 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:21.186 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:36:21.186 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:29.326 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:29.326 10:08:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:29.326 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:29.326 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:29.327 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:29.327 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:29.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:29.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:36:29.327 00:36:29.327 --- 10.0.0.2 ping statistics --- 00:36:29.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:29.327 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:29.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:29.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:36:29.327 00:36:29.327 --- 10.0.0.1 ping statistics --- 00:36:29.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:29.327 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:29.327 10:08:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=4162283 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 4162283 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 4162283 ']' 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:29.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:29.327 10:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:29.327 [2024-11-27 10:08:44.025992] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:29.327 [2024-11-27 10:08:44.027926] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:36:29.327 [2024-11-27 10:08:44.028014] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:29.327 [2024-11-27 10:08:44.131075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:29.327 [2024-11-27 10:08:44.184104] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:29.327 [2024-11-27 10:08:44.184171] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:29.327 [2024-11-27 10:08:44.184180] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:29.327 [2024-11-27 10:08:44.184187] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:29.327 [2024-11-27 10:08:44.184194] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:29.327 [2024-11-27 10:08:44.186461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:29.327 [2024-11-27 10:08:44.186702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:36:29.327 [2024-11-27 10:08:44.186862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:36:29.327 [2024-11-27 10:08:44.186865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:29.327 [2024-11-27 10:08:44.263705] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
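
(The nvmfappstart step traced here amounts to launching nvmf_tgt inside the target namespace and polling its RPC socket until it answers. A minimal sketch of that pattern, with the binary path, flags and socket path taken from the trace; the retry loop is illustrative, and autotest_common.sh's waitforlisten does the same with retries and a timeout. Core mask 0x78 selects cores 3-6, matching the four reactor start-up notices above.)

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        # rpc_get_methods fails until the app listens on /var/tmp/spdk.sock
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done
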
00:36:29.327 [2024-11-27 10:08:44.264079] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:29.327 [2024-11-27 10:08:44.264732] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:36:29.327 [2024-11-27 10:08:44.265174] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:29.327 [2024-11-27 10:08:44.265182] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:29.589 10:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:29.589 10:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:36:29.589 10:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:29.589 10:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:29.589 10:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:29.589 10:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:29.589 10:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:29.589 10:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.589 10:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:29.589 [2024-11-27 10:08:44.903881] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:29.589 10:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.589 10:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:29.589 10:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.589 10:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:29.589 Malloc0 00:36:29.589 10:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.590 10:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:29.590 10:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.590 10:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:29.590 10:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.590 10:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:29.590 10:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.590 10:08:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:29.590 10:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.590 10:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:29.590 10:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.590 10:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:29.590 [2024-11-27 10:08:44.996277] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:29.590 10:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.590 10:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:36:29.590 10:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:36:29.590 10:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:36:29.590 10:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:36:29.590 10:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:29.590 10:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:29.590 { 00:36:29.590 "params": { 00:36:29.590 "name": "Nvme$subsystem", 00:36:29.590 "trtype": "$TEST_TRANSPORT", 00:36:29.590 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:29.590 "adrfam": "ipv4", 00:36:29.590 "trsvcid": "$NVMF_PORT", 00:36:29.590 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:29.590 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:29.590 "hdgst": ${hdgst:-false}, 00:36:29.590 "ddgst": ${ddgst:-false} 00:36:29.590 }, 00:36:29.590 "method": "bdev_nvme_attach_controller" 00:36:29.590 } 00:36:29.590 EOF 00:36:29.590 )") 00:36:29.590 10:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:36:29.590 10:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:36:29.590 10:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:36:29.590 10:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:29.590 "params": { 00:36:29.590 "name": "Nvme1", 00:36:29.590 "trtype": "tcp", 00:36:29.590 "traddr": "10.0.0.2", 00:36:29.590 "adrfam": "ipv4", 00:36:29.590 "trsvcid": "4420", 00:36:29.590 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:29.590 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:29.590 "hdgst": false, 00:36:29.590 "ddgst": false 00:36:29.590 }, 00:36:29.590 "method": "bdev_nvme_attach_controller" 00:36:29.590 }' 00:36:29.590 [2024-11-27 10:08:45.054112] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
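
(Run by hand, the provisioning sequence traced above comes down to the rpc.py calls below. Commands and arguments are verbatim from the trace; rpc_cmd is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock.)

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP transport, 8 KiB in-capsule data
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
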
00:36:29.590 [2024-11-27 10:08:45.054182] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4162633 ] 00:36:29.851 [2024-11-27 10:08:45.145332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:29.851 [2024-11-27 10:08:45.201763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:29.851 [2024-11-27 10:08:45.201926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:29.851 [2024-11-27 10:08:45.201926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:30.112 I/O targets: 00:36:30.112 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:36:30.112 00:36:30.112 00:36:30.112 CUnit - A unit testing framework for C - Version 2.1-3 00:36:30.112 http://cunit.sourceforge.net/ 00:36:30.112 00:36:30.112 00:36:30.112 Suite: bdevio tests on: Nvme1n1 00:36:30.112 Test: blockdev write read block ...passed 00:36:30.112 Test: blockdev write zeroes read block ...passed 00:36:30.112 Test: blockdev write zeroes read no split ...passed 00:36:30.112 Test: blockdev write zeroes read split ...passed 00:36:30.112 Test: blockdev write zeroes read split partial ...passed 00:36:30.112 Test: blockdev reset ...[2024-11-27 10:08:45.573165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:36:30.112 [2024-11-27 10:08:45.573267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfc970 (9): Bad file descriptor 00:36:30.373 [2024-11-27 10:08:45.622275] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:36:30.373 passed 00:36:30.373 Test: blockdev write read 8 blocks ...passed 00:36:30.373 Test: blockdev write read size > 128k ...passed 00:36:30.373 Test: blockdev write read invalid size ...passed 00:36:30.373 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:36:30.373 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:36:30.373 Test: blockdev write read max offset ...passed 00:36:30.373 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:36:30.373 Test: blockdev writev readv 8 blocks ...passed 00:36:30.634 Test: blockdev writev readv 30 x 1block ...passed 00:36:30.634 Test: blockdev writev readv block ...passed 00:36:30.634 Test: blockdev writev readv size > 128k ...passed 00:36:30.634 Test: blockdev writev readv size > 128k in two iovs ...passed 00:36:30.635 Test: blockdev comparev and writev ...[2024-11-27 10:08:45.889975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:30.635 [2024-11-27 10:08:45.890025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:30.635 [2024-11-27 10:08:45.890042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:30.635 [2024-11-27 10:08:45.890052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:30.635 [2024-11-27 10:08:45.890697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:30.635 [2024-11-27 10:08:45.890709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:30.635 [2024-11-27 10:08:45.890724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:30.635 [2024-11-27 10:08:45.890732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:30.635 [2024-11-27 10:08:45.891396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:30.635 [2024-11-27 10:08:45.891408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:30.635 [2024-11-27 10:08:45.891422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:30.635 [2024-11-27 10:08:45.891430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:30.635 [2024-11-27 10:08:45.892106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:30.635 [2024-11-27 10:08:45.892118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:30.635 [2024-11-27 10:08:45.892131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:30.635 [2024-11-27 10:08:45.892139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:30.635 passed 00:36:30.635 Test: blockdev nvme passthru rw ...passed 00:36:30.635 Test: blockdev nvme passthru vendor specific ...[2024-11-27 10:08:45.976857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:30.635 [2024-11-27 10:08:45.976873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:30.635 [2024-11-27 10:08:45.977254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:30.635 [2024-11-27 10:08:45.977267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:30.635 [2024-11-27 10:08:45.977646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:30.635 [2024-11-27 10:08:45.977657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:30.635 [2024-11-27 10:08:45.978050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:30.635 [2024-11-27 10:08:45.978062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:30.635 passed 00:36:30.635 Test: blockdev nvme admin passthru ...passed 00:36:30.635 Test: blockdev copy ...passed 00:36:30.635 00:36:30.635 Run Summary: Type Total Ran Passed Failed Inactive 00:36:30.635 suites 1 1 n/a 0 0 00:36:30.635 tests 23 23 23 0 0 00:36:30.635 asserts 152 152 152 0 n/a 00:36:30.635 00:36:30.635 Elapsed time = 1.270 seconds 00:36:30.896 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:30.896 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.896 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:30.896 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.896 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:36:30.896 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:36:30.896 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:30.896 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:36:30.897 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:30.897 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:36:30.897 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:30.897 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:30.897 rmmod nvme_tcp 00:36:30.897 rmmod nvme_fabrics 00:36:30.897 rmmod nvme_keyring 00:36:30.897 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
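
(The teardown that follows mirrors the setup in reverse. This is a sketch assembled from the commands in this trace: kill/wait targets the nvmfpid recorded at start-up, and the iptr helper strips only the comment-tagged SPDK rule.)

    modprobe -v -r nvme-tcp                                # unload kernel initiator transport
    modprobe -v -r nvme-fabrics                            # then the fabrics core beneath it
    kill "$nvmfpid" && wait "$nvmfpid"                     # stop the in-namespace nvmf_tgt
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the tagged ACCEPT rule
    ip netns delete cvl_0_0_ns_spdk                        # remove the target namespace
    ip -4 addr flush cvl_0_1                               # clear the initiator address
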
00:36:30.897 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:36:30.897 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:36:30.897 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 4162283 ']' 00:36:30.897 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 4162283 00:36:30.897 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 4162283 ']' 00:36:30.897 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 4162283 00:36:30.897 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:36:30.897 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:30.897 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4162283 00:36:30.897 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:36:30.897 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:36:30.897 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4162283' 00:36:30.897 killing process with pid 4162283 00:36:30.897 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 4162283 00:36:30.897 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 4162283 00:36:31.158 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:31.158 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:31.158 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:31.158 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:36:31.158 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:36:31.158 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:31.158 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:36:31.158 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:31.158 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:31.158 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:31.158 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:31.158 10:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:33.705 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:33.705 00:36:33.705 real 0m12.395s 00:36:33.705 user 
0m9.909s 00:36:33.705 sys 0m6.667s 00:36:33.705 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:33.705 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:33.705 ************************************ 00:36:33.705 END TEST nvmf_bdevio 00:36:33.705 ************************************ 00:36:33.705 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:36:33.705 00:36:33.705 real 5m1.061s 00:36:33.705 user 10m27.109s 00:36:33.705 sys 2m5.556s 00:36:33.705 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:33.705 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:33.705 ************************************ 00:36:33.705 END TEST nvmf_target_core_interrupt_mode 00:36:33.705 ************************************ 00:36:33.705 10:08:48 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:36:33.705 10:08:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:33.705 10:08:48 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:33.705 10:08:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:33.705 ************************************ 00:36:33.705 START TEST nvmf_interrupt 00:36:33.705 ************************************ 00:36:33.705 10:08:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:36:33.705 * Looking for test storage... 
00:36:33.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:33.705 10:08:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:33.705 10:08:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:36:33.705 10:08:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:33.705 10:08:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:33.705 10:08:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:33.705 10:08:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:33.705 10:08:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:33.705 10:08:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:36:33.705 10:08:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:36:33.705 10:08:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:36:33.705 10:08:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:36:33.705 10:08:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:36:33.705 10:08:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:33.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:33.706 --rc genhtml_branch_coverage=1 00:36:33.706 --rc genhtml_function_coverage=1 00:36:33.706 --rc genhtml_legend=1 00:36:33.706 --rc geninfo_all_blocks=1 00:36:33.706 --rc geninfo_unexecuted_blocks=1 00:36:33.706 00:36:33.706 ' 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:33.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:33.706 --rc genhtml_branch_coverage=1 00:36:33.706 --rc genhtml_function_coverage=1 00:36:33.706 --rc genhtml_legend=1 00:36:33.706 --rc geninfo_all_blocks=1 00:36:33.706 --rc geninfo_unexecuted_blocks=1 00:36:33.706 00:36:33.706 ' 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:33.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:33.706 --rc genhtml_branch_coverage=1 00:36:33.706 --rc genhtml_function_coverage=1 00:36:33.706 --rc genhtml_legend=1 00:36:33.706 --rc geninfo_all_blocks=1 00:36:33.706 --rc geninfo_unexecuted_blocks=1 00:36:33.706 00:36:33.706 ' 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:33.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:33.706 --rc genhtml_branch_coverage=1 00:36:33.706 --rc genhtml_function_coverage=1 00:36:33.706 --rc genhtml_legend=1 00:36:33.706 --rc geninfo_all_blocks=1 00:36:33.706 --rc geninfo_unexecuted_blocks=1 00:36:33.706 00:36:33.706 ' 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:36:33.706 10:08:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:41.852 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:41.852 10:08:56 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:41.852 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:41.852 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:41.853 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:41.853 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:41.853 10:08:56 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:41.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:41.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:36:41.853 00:36:41.853 --- 10.0.0.2 ping statistics --- 00:36:41.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:41.853 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:41.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:41.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:36:41.853 00:36:41.853 --- 10.0.0.1 ping statistics --- 00:36:41.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:41.853 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=4166981 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 4166981 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 4166981 ']' 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:41.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:41.853 10:08:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:41.853 [2024-11-27 10:08:56.549993] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:41.853 [2024-11-27 10:08:56.551120] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:36:41.853 [2024-11-27 10:08:56.551177] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:41.853 [2024-11-27 10:08:56.651360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:41.853 [2024-11-27 10:08:56.702667] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
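
(Further down, reactor_is_idle probes the freshly started target with top. The check reduces to the sketch below: the 65%/30% thresholds, the pid and the top/grep/sed/awk pipeline are from the trace, while the final comparison is paraphrased, since bash itself cannot compare a fractional %CPU against the idle threshold.)

    # one batch iteration, thread view, only this pid; field 9 is %CPU
    cpu_rate=$(top -bHn 1 -p "$nvmfpid" -w 256 | grep reactor_0 \
                   | sed -e 's/^\s*//g' | awk '{print $9}')
    # idle: stays under the 30% idle threshold; busy would need to exceed 65%
    awk -v c="$cpu_rate" 'BEGIN { exit !(c < 30) }' && echo "reactor_0 is idle"
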
00:36:41.853 [2024-11-27 10:08:56.702718] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:41.853 [2024-11-27 10:08:56.702726] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:41.853 [2024-11-27 10:08:56.702733] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:41.853 [2024-11-27 10:08:56.702739] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:41.853 [2024-11-27 10:08:56.704533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:41.853 [2024-11-27 10:08:56.704661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:41.853 [2024-11-27 10:08:56.780516] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:41.853 [2024-11-27 10:08:56.781024] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:41.853 [2024-11-27 10:08:56.781373] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:36:42.115 5000+0 records in 00:36:42.115 5000+0 records out 00:36:42.115 10240000 bytes (10 MB, 9.8 MiB) copied, 0.01812 s, 565 MB/s 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:42.115 AIO0 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:42.115 [2024-11-27 10:08:57.489660] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.115 10:08:57 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:42.115 [2024-11-27 10:08:57.534059] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 4166981 0 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4166981 0 idle 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4166981 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4166981 -w 256 00:36:42.115 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:42.377 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4166981 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.31 reactor_0' 00:36:42.377 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4166981 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.31 reactor_0 00:36:42.377 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:42.377 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:42.377 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:42.377 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:36:42.377 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:42.377 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:42.377 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:42.377 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:42.377 10:08:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:36:42.377 10:08:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 4166981 1 00:36:42.377 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4166981 1 idle 00:36:42.377 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4166981 00:36:42.377 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:42.377 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:42.377 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:42.377 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:42.377 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:42.377 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:42.377 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:42.377 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:42.377 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:42.377 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4166981 -w 256 00:36:42.377 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:42.638 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4166986 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1' 00:36:42.638 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4166986 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1 00:36:42.638 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:42.638 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:42.638 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:42.638 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:42.638 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:42.638 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:42.638 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:42.638 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:42.638 10:08:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:36:42.638 10:08:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=4167353 00:36:42.638 10:08:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:36:42.638 10:08:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:36:42.638 10:08:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 
0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:42.638 10:08:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 4166981 0 00:36:42.638 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 4166981 0 busy 00:36:42.638 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4166981 00:36:42.638 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:42.638 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:36:42.638 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:36:42.638 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:42.638 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:36:42.638 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:42.638 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:42.638 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:42.638 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4166981 -w 256 00:36:42.638 10:08:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:42.638 10:08:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4166981 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.47 reactor_0' 00:36:42.900 10:08:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4166981 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.47 reactor_0 00:36:42.900 10:08:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:42.900 10:08:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:42.900 10:08:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:36:42.900 10:08:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:36:42.900 10:08:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:36:42.900 10:08:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:36:42.900 10:08:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:36:42.900 10:08:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:42.900 10:08:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:36:42.900 10:08:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:36:42.900 10:08:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 4166981 1 00:36:42.900 10:08:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 4166981 1 busy 00:36:42.900 10:08:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4166981 00:36:42.900 10:08:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:42.900 10:08:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:36:42.900 10:08:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:36:42.900 10:08:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:42.900 10:08:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:36:42.900 10:08:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:42.900 10:08:58 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:36:42.900 10:08:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:42.900 10:08:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4166981 -w 256 00:36:42.900 10:08:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:42.900 10:08:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4166986 root 20 0 128.2g 44928 32256 R 93.3 0.0 0:00.26 reactor_1' 00:36:42.900 10:08:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4166986 root 20 0 128.2g 44928 32256 R 93.3 0.0 0:00.26 reactor_1 00:36:42.900 10:08:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:42.900 10:08:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:42.900 10:08:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:36:42.900 10:08:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:36:42.900 10:08:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:36:42.900 10:08:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:36:42.900 10:08:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:36:42.900 10:08:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:42.900 10:08:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 4167353 00:36:52.907 Initializing NVMe Controllers 00:36:52.907 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:52.907 Controller IO queue size 256, less than required. 00:36:52.907 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:52.907 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:36:52.907 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:36:52.907 Initialization complete. Launching workers. 
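[editor's note] The per-core rows and the Total row in the latency summary below are self-consistent: with the 4 KiB I/O size set by -o 4096, MiB/s is just IOPS scaled by the block size. A quick sanity check using the figures reported by this run (they agree to within rounding of the per-core values):

awk 'BEGIN {
  iops = 18242.99 + 18870.48                                 # the two per-core TCP rows
  printf "IOPS  total: %.2f\n", iops                         # ~37113.48 as reported
  printf "MiB/s total: %.2f\n", iops * 4096 / (1024 * 1024)  # ~144.97 as reported
}'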
00:36:52.907 ======================================================== 00:36:52.907 Latency(us) 00:36:52.907 Device Information : IOPS MiB/s Average min max 00:36:52.907 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 18242.99 71.26 14037.27 3775.32 31037.72 00:36:52.907 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 18870.48 73.71 13570.67 4241.60 32402.30 00:36:52.907 ======================================================== 00:36:52.907 Total : 37113.48 144.97 13800.02 3775.32 32402.30 00:36:52.907 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 4166981 0 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4166981 0 idle 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4166981 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4166981 -w 256 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4166981 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.32 reactor_0' 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4166981 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.32 reactor_0 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 4166981 1 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4166981 1 idle 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4166981 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4166981 -w 256 00:36:52.907 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:53.168 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4166986 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1' 00:36:53.168 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4166986 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1 00:36:53.168 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:53.168 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:53.168 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:53.168 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:53.168 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:53.168 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:53.168 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:53.168 10:09:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:53.168 10:09:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:53.741 10:09:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:36:53.741 10:09:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:36:53.741 10:09:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:36:53.741 10:09:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:36:53.741 10:09:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:36:55.652 10:09:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:36:55.652 10:09:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:36:55.652 10:09:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:36:55.652 10:09:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:36:55.652 10:09:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:36:55.652 10:09:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:36:55.652 10:09:11 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:36:55.652 10:09:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 4166981 0 00:36:55.652 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4166981 0 idle 00:36:55.652 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4166981 00:36:55.652 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:55.652 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:55.652 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:55.652 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:55.652 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:55.652 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:55.652 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:55.652 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:55.652 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:55.912 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4166981 -w 256 00:36:55.913 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:55.913 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4166981 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.71 reactor_0' 00:36:55.913 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4166981 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.71 reactor_0 00:36:55.913 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:55.913 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:55.913 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:55.913 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:55.913 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:55.913 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:55.913 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:55.913 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:55.913 10:09:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:36:55.913 10:09:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 4166981 1 00:36:55.913 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4166981 1 idle 00:36:55.913 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4166981 00:36:55.913 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:55.913 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:55.913 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:55.913 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:55.913 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:55.913 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:55.913 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
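[editor's note] The idle check being traced here boils down to sampling the reactor thread's %CPU once with top and comparing it against a threshold. A condensed, illustrative re-creation of that probe (the helper names are mine; the thresholds and the top/grep/sed/awk pipeline mirror the trace):

reactor_cpu_rate() {
  local pid=$1 idx=$2
  top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" | sed -e 's/^\s*//g' | awk '{print $9}'
}

reactor_is_idle() {
  local idle_threshold=30 rate
  rate=$(reactor_cpu_rate "$1" "$2")
  rate=${rate%.*}                  # 99.9 -> 99, matching the cpu_rate truncation above
  (( ${rate:-0} <= idle_threshold ))
}

reactor_is_idle 4166981 0 && echo 'reactor_0 is idle'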
00:36:55.913 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:55.913 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:55.913 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4166981 -w 256 00:36:55.913 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:56.174 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4166986 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1' 00:36:56.174 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4166986 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1 00:36:56.174 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:56.174 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:56.174 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:56.174 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:56.174 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:56.174 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:56.174 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:56.174 10:09:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:56.174 10:09:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:56.435 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:56.435 10:09:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:56.435 10:09:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:36:56.435 10:09:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:36:56.435 10:09:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:56.435 10:09:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:36:56.435 10:09:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:56.435 10:09:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:36:56.435 10:09:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:36:56.435 10:09:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:36:56.435 10:09:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:56.435 10:09:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:36:56.435 10:09:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:56.435 10:09:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:36:56.435 10:09:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:56.435 10:09:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:56.435 rmmod nvme_tcp 00:36:56.435 rmmod nvme_fabrics 00:36:56.435 rmmod nvme_keyring 00:36:56.435 10:09:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:56.435 10:09:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:36:56.435 10:09:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:36:56.435 10:09:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
4166981 ']' 00:36:56.435 10:09:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 4166981 00:36:56.435 10:09:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 4166981 ']' 00:36:56.435 10:09:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 4166981 00:36:56.435 10:09:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:36:56.435 10:09:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:56.435 10:09:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4166981 00:36:56.695 10:09:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:56.695 10:09:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:56.695 10:09:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4166981' 00:36:56.695 killing process with pid 4166981 00:36:56.695 10:09:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 4166981 00:36:56.695 10:09:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 4166981 00:36:56.695 10:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:56.695 10:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:56.695 10:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:56.695 10:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:36:56.695 10:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:36:56.695 10:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:56.695 10:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:36:56.695 10:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:56.695 10:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:56.695 10:09:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:56.695 10:09:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:56.695 10:09:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:59.239 10:09:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:59.239 00:36:59.239 real 0m25.451s 00:36:59.239 user 0m39.998s 00:36:59.239 sys 0m10.120s 00:36:59.239 10:09:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:59.239 10:09:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:59.239 ************************************ 00:36:59.239 END TEST nvmf_interrupt 00:36:59.239 ************************************ 00:36:59.239 00:36:59.239 real 30m11.297s 00:36:59.239 user 61m47.274s 00:36:59.239 sys 10m20.866s 00:36:59.239 10:09:14 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:59.239 10:09:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:59.239 ************************************ 00:36:59.239 END TEST nvmf_tcp 00:36:59.239 ************************************ 00:36:59.239 10:09:14 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:36:59.239 10:09:14 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:59.239 10:09:14 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:59.239 10:09:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:59.239 10:09:14 -- common/autotest_common.sh@10 -- # set +x 00:36:59.239 ************************************ 00:36:59.239 START TEST spdkcli_nvmf_tcp 00:36:59.239 ************************************ 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:59.239 * Looking for test storage... 00:36:59.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:59.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:59.239 --rc genhtml_branch_coverage=1 00:36:59.239 --rc genhtml_function_coverage=1 00:36:59.239 --rc genhtml_legend=1 00:36:59.239 --rc geninfo_all_blocks=1 00:36:59.239 --rc geninfo_unexecuted_blocks=1 00:36:59.239 00:36:59.239 ' 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:59.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:59.239 --rc genhtml_branch_coverage=1 00:36:59.239 --rc genhtml_function_coverage=1 00:36:59.239 --rc genhtml_legend=1 00:36:59.239 --rc geninfo_all_blocks=1 00:36:59.239 --rc geninfo_unexecuted_blocks=1 00:36:59.239 00:36:59.239 ' 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:59.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:59.239 --rc genhtml_branch_coverage=1 00:36:59.239 --rc genhtml_function_coverage=1 00:36:59.239 --rc genhtml_legend=1 00:36:59.239 --rc geninfo_all_blocks=1 00:36:59.239 --rc geninfo_unexecuted_blocks=1 00:36:59.239 00:36:59.239 ' 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:59.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:59.239 --rc genhtml_branch_coverage=1 00:36:59.239 --rc genhtml_function_coverage=1 00:36:59.239 --rc genhtml_legend=1 00:36:59.239 --rc geninfo_all_blocks=1 00:36:59.239 --rc geninfo_unexecuted_blocks=1 00:36:59.239 00:36:59.239 ' 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:36:59.239 
10:09:14 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:59.239 10:09:14 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:36:59.240 10:09:14 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:59.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=4171098 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 4171098 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 4171098 ']' 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:59.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:59.240 10:09:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:59.240 [2024-11-27 10:09:14.592577] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
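[editor's note] The waitforlisten call traced above blocks until the freshly launched nvmf_tgt (pid 4171098) is ready on /var/tmp/spdk.sock. A simplified stand-in for that helper, assuming the default socket path; the real helper polls the RPC interface itself rather than merely testing for the socket file:

wait_for_spdk_rpc() {
  local pid=$1 sock=${2:-/var/tmp/spdk.sock} max_retries=100
  while (( max_retries-- > 0 )); do
    kill -0 "$pid" 2>/dev/null || return 1   # give up if the target process died
    [[ -S $sock ]] && return 0               # the UNIX-domain RPC socket exists
    sleep 0.1
  done
  return 1
}

wait_for_spdk_rpc 4171098 /var/tmp/spdk.sock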
00:36:59.240 [2024-11-27 10:09:14.592643] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4171098 ] 00:36:59.240 [2024-11-27 10:09:14.682856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:59.500 [2024-11-27 10:09:14.736686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:59.500 [2024-11-27 10:09:14.736692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:00.071 10:09:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:00.072 10:09:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:37:00.072 10:09:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:37:00.072 10:09:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:00.072 10:09:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:00.072 10:09:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:37:00.072 10:09:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:37:00.072 10:09:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:37:00.072 10:09:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:00.072 10:09:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:00.072 10:09:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:37:00.072 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:37:00.072 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:37:00.072 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:37:00.072 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:37:00.072 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:37:00.072 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:37:00.072 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:00.072 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:37:00.072 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:37:00.072 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:00.072 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:00.072 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:37:00.072 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:00.072 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:00.072 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:37:00.072 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:37:00.072 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:00.072 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:00.072 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:00.072 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:37:00.072 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:37:00.072 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:00.072 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:37:00.072 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:00.072 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:37:00.072 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:37:00.072 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:37:00.072 ' 00:37:03.372 [2024-11-27 10:09:18.133258] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:04.313 [2024-11-27 10:09:19.497496] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:37:06.858 [2024-11-27 10:09:22.020575] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:37:08.770 [2024-11-27 10:09:24.226776] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:37:10.683 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:37:10.683 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:37:10.683 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:37:10.683 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:37:10.683 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:37:10.683 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:37:10.683 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:37:10.683 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:10.683 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:37:10.683 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:37:10.683 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:10.683 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:10.683 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:37:10.683 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:10.683 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:10.683 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:37:10.683 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:10.683 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:10.683 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:10.683 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:10.683 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:37:10.683 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:37:10.683 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:10.683 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:37:10.683 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:10.683 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:37:10.683 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:37:10.683 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:37:10.683 10:09:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:37:10.683 10:09:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:10.683 10:09:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:10.683 10:09:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:37:10.683 10:09:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:10.683 10:09:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:10.683 10:09:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:37:10.683 10:09:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:37:11.254 10:09:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:37:11.254 10:09:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:37:11.254 10:09:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:37:11.254 10:09:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:11.254 10:09:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:11.254 
10:09:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:37:11.254 10:09:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:11.254 10:09:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:11.254 10:09:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:37:11.254 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:37:11.254 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:11.254 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:37:11.254 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:37:11.254 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:37:11.254 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:37:11.254 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:11.254 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:37:11.254 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:37:11.254 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:37:11.254 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:37:11.254 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:37:11.254 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:37:11.254 ' 00:37:17.842 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:37:17.842 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:37:17.842 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:17.842 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:37:17.842 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:37:17.842 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:37:17.842 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:37:17.842 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:17.842 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:37:17.842 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:37:17.842 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:37:17.842 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:37:17.842 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:37:17.842 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:37:17.842 10:09:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:37:17.842 10:09:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:17.842 10:09:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:17.842 
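[editor's note] The clear pass undoes the configuration strictly top-down: namespaces, hosts, and listeners come off first, subsystems are deleted next, and the malloc bdevs go last, so no delete runs while a higher-level object still references its target. A minimal sketch of the same cleanup through the JSON-RPC client instead of spdkcli — nvmf_delete_subsystem appears verbatim later in this run, but the other method names and flag spellings are standard rpc.py usage assumed here, not verified against this build:

  # detach namespace 1, then one listener, from cnode1
  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2014-08.org.spdk:cnode1 1
  ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4262
  # delete a whole subsystem, then a backing bdev
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2014-08.org.spdk:cnode3
  ./scripts/rpc.py bdev_malloc_delete Malloc6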
10:09:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 4171098 00:37:17.842 10:09:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 4171098 ']' 00:37:17.842 10:09:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 4171098 00:37:17.842 10:09:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:37:17.842 10:09:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:17.842 10:09:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4171098 00:37:17.842 10:09:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:17.842 10:09:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:17.842 10:09:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4171098' 00:37:17.842 killing process with pid 4171098 00:37:17.842 10:09:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 4171098 00:37:17.842 10:09:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 4171098 00:37:17.842 10:09:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:37:17.842 10:09:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:37:17.842 10:09:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 4171098 ']' 00:37:17.842 10:09:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 4171098 00:37:17.842 10:09:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 4171098 ']' 00:37:17.842 10:09:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 4171098 00:37:17.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4171098) - No such process 00:37:17.842 10:09:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 4171098 is not found' 00:37:17.842 Process with pid 4171098 is not found 00:37:17.842 10:09:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:37:17.842 10:09:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:37:17.842 10:09:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:37:17.842 00:37:17.842 real 0m18.101s 00:37:17.842 user 0m40.215s 00:37:17.842 sys 0m0.861s 00:37:17.842 10:09:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:17.842 10:09:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:17.842 ************************************ 00:37:17.842 END TEST spdkcli_nvmf_tcp 00:37:17.842 ************************************ 00:37:17.842 10:09:32 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:17.843 10:09:32 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:17.843 10:09:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:17.843 10:09:32 -- common/autotest_common.sh@10 -- # set +x 00:37:17.843 ************************************ 00:37:17.843 START TEST nvmf_identify_passthru 00:37:17.843 ************************************ 00:37:17.843 10:09:32 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:17.843 * Looking for test 
storage... 00:37:17.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:17.843 10:09:32 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:17.843 10:09:32 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:37:17.843 10:09:32 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:17.843 10:09:32 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:37:17.843 10:09:32 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:17.843 10:09:32 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:17.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:17.843 --rc genhtml_branch_coverage=1 00:37:17.843 --rc genhtml_function_coverage=1 00:37:17.843 --rc genhtml_legend=1 00:37:17.843 --rc geninfo_all_blocks=1 00:37:17.843 --rc geninfo_unexecuted_blocks=1 00:37:17.843 00:37:17.843 ' 00:37:17.843 10:09:32 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:17.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:17.843 --rc genhtml_branch_coverage=1 00:37:17.843 --rc genhtml_function_coverage=1 00:37:17.843 --rc genhtml_legend=1 00:37:17.843 --rc geninfo_all_blocks=1 00:37:17.843 --rc geninfo_unexecuted_blocks=1 00:37:17.843 00:37:17.843 ' 00:37:17.843 10:09:32 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:17.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:17.843 --rc genhtml_branch_coverage=1 00:37:17.843 --rc genhtml_function_coverage=1 00:37:17.843 --rc genhtml_legend=1 00:37:17.843 --rc geninfo_all_blocks=1 00:37:17.843 --rc geninfo_unexecuted_blocks=1 00:37:17.843 00:37:17.843 ' 00:37:17.843 10:09:32 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:17.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:17.843 --rc genhtml_branch_coverage=1 00:37:17.843 --rc genhtml_function_coverage=1 00:37:17.843 --rc genhtml_legend=1 00:37:17.843 --rc geninfo_all_blocks=1 00:37:17.843 --rc geninfo_unexecuted_blocks=1 00:37:17.843 00:37:17.843 ' 00:37:17.843 10:09:32 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:17.843 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:37:17.843 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:17.843 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:17.843 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:17.843 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:37:17.843 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:17.843 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:17.843 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:17.843 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:17.843 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:17.843 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:17.843 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:17.843 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:17.843 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:17.843 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:17.843 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:17.843 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:17.843 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:17.843 10:09:32 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.843 10:09:32 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.843 10:09:32 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.843 10:09:32 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:17.843 10:09:32 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.843 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:37:17.843 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:17.843 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:17.843 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:17.843 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:17.843 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:17.843 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:17.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:17.843 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:17.843 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:17.843 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:17.843 10:09:32 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:17.843 10:09:32 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:17.843 10:09:32 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.843 10:09:32 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.844 10:09:32 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.844 10:09:32 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:17.844 10:09:32 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.844 10:09:32 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:37:17.844 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:17.844 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:17.844 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:17.844 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:17.844 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:17.844 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:17.844 10:09:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:17.844 10:09:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:17.844 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:17.844 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:17.844 10:09:32 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:37:17.844 10:09:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:37:24.428 10:09:39 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:24.428 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:24.428 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:24.428 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:24.428 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:24.429 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:24.429 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:24.429 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:24.429 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:24.429 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:24.429 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:24.429 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:24.429 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:37:24.429 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:24.429 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:24.429 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:24.429 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:24.429 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:24.429 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:24.429 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:24.429 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:24.429 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:24.429 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:24.429 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:24.429 10:09:39 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:24.429 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:24.429 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:24.429 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:24.429 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:24.429 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:24.429 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:24.429 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:24.429 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:24.429 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:24.429 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:24.689 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:24.690 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:24.690 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:24.690 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:24.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:24.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:37:24.690 00:37:24.690 --- 10.0.0.2 ping statistics --- 00:37:24.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:24.690 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:37:24.690 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:24.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:24.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:37:24.690 00:37:24.690 --- 10.0.0.1 ping statistics --- 00:37:24.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:24.690 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:37:24.690 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:24.690 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:37:24.690 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:24.690 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:24.690 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:24.690 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:24.690 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:24.690 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:24.690 10:09:39 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:24.690 10:09:40 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:37:24.690 10:09:40 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:24.690 10:09:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:24.690 10:09:40 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:37:24.690 10:09:40 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:37:24.690 10:09:40 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:37:24.690 10:09:40 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:37:24.690 10:09:40 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:37:24.690 10:09:40 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:37:24.690 10:09:40 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:37:24.690 10:09:40 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:37:24.690 10:09:40 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:37:24.690 10:09:40 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:37:24.690 10:09:40 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:37:24.690 10:09:40 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:37:24.690 10:09:40 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:37:24.690 10:09:40 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:37:24.690 10:09:40 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:37:24.690 10:09:40 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:37:24.690 10:09:40 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:37:24.690 10:09:40 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:37:25.262 10:09:40 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605487 00:37:25.262 10:09:40 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:37:25.262 10:09:40 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:37:25.262 10:09:40 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:37:25.833 10:09:41 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:37:25.833 10:09:41 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:37:25.833 10:09:41 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:25.833 10:09:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:25.833 10:09:41 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:37:25.833 10:09:41 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:25.833 10:09:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:25.833 10:09:41 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=4178534 00:37:25.833 10:09:41 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:25.833 10:09:41 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:37:25.833 10:09:41 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 4178534 00:37:25.833 10:09:41 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 4178534 ']' 00:37:25.833 10:09:41 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:25.833 10:09:41 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:25.833 10:09:41 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:25.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:25.833 10:09:41 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:25.833 10:09:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:25.833 [2024-11-27 10:09:41.227495] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:37:25.833 [2024-11-27 10:09:41.227547] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:26.094 [2024-11-27 10:09:41.320335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:26.094 [2024-11-27 10:09:41.357904] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:26.094 [2024-11-27 10:09:41.357937] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:37:26.094 [2024-11-27 10:09:41.357945] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:26.094 [2024-11-27 10:09:41.357952] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:26.094 [2024-11-27 10:09:41.357958] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:26.094 [2024-11-27 10:09:41.359472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:26.094 [2024-11-27 10:09:41.359622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:26.094 [2024-11-27 10:09:41.359768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:26.094 [2024-11-27 10:09:41.359769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:26.744 10:09:42 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:26.744 10:09:42 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:37:26.744 10:09:42 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:37:26.744 10:09:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.744 10:09:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:26.744 INFO: Log level set to 20 00:37:26.744 INFO: Requests: 00:37:26.744 { 00:37:26.744 "jsonrpc": "2.0", 00:37:26.744 "method": "nvmf_set_config", 00:37:26.744 "id": 1, 00:37:26.744 "params": { 00:37:26.744 "admin_cmd_passthru": { 00:37:26.744 "identify_ctrlr": true 00:37:26.744 } 00:37:26.744 } 00:37:26.744 } 00:37:26.744 00:37:26.744 INFO: response: 00:37:26.744 { 00:37:26.744 "jsonrpc": "2.0", 00:37:26.744 "id": 1, 00:37:26.744 "result": true 00:37:26.744 } 00:37:26.744 00:37:26.744 10:09:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.744 10:09:42 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:37:26.744 10:09:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.744 10:09:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:26.744 INFO: Setting log level to 20 00:37:26.744 INFO: Setting log level to 20 00:37:26.744 INFO: Log level set to 20 00:37:26.744 INFO: Log level set to 20 00:37:26.744 INFO: Requests: 00:37:26.744 { 00:37:26.744 "jsonrpc": "2.0", 00:37:26.744 "method": "framework_start_init", 00:37:26.744 "id": 1 00:37:26.744 } 00:37:26.744 00:37:26.744 INFO: Requests: 00:37:26.744 { 00:37:26.744 "jsonrpc": "2.0", 00:37:26.744 "method": "framework_start_init", 00:37:26.744 "id": 1 00:37:26.744 } 00:37:26.744 00:37:26.744 [2024-11-27 10:09:42.097443] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:37:26.744 INFO: response: 00:37:26.744 { 00:37:26.744 "jsonrpc": "2.0", 00:37:26.744 "id": 1, 00:37:26.744 "result": true 00:37:26.744 } 00:37:26.744 00:37:26.744 INFO: response: 00:37:26.744 { 00:37:26.744 "jsonrpc": "2.0", 00:37:26.744 "id": 1, 00:37:26.744 "result": true 00:37:26.744 } 00:37:26.744 00:37:26.744 10:09:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.744 10:09:42 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:26.744 10:09:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.745 10:09:42 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:37:26.745 INFO: Setting log level to 40 00:37:26.745 INFO: Setting log level to 40 00:37:26.745 INFO: Setting log level to 40 00:37:26.745 [2024-11-27 10:09:42.111016] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:26.745 10:09:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.745 10:09:42 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:37:26.745 10:09:42 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:26.745 10:09:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:26.745 10:09:42 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:37:26.745 10:09:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.745 10:09:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:27.033 Nvme0n1 00:37:27.033 10:09:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.033 10:09:42 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:37:27.033 10:09:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.033 10:09:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:27.033 10:09:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.033 10:09:42 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:37:27.033 10:09:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.033 10:09:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:27.297 10:09:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.297 10:09:42 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:27.297 10:09:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.297 10:09:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:27.297 [2024-11-27 10:09:42.521460] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:27.297 10:09:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.297 10:09:42 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:37:27.297 10:09:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.297 10:09:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:27.297 [ 00:37:27.297 { 00:37:27.297 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:37:27.297 "subtype": "Discovery", 00:37:27.297 "listen_addresses": [], 00:37:27.297 "allow_any_host": true, 00:37:27.297 "hosts": [] 00:37:27.297 }, 00:37:27.297 { 00:37:27.297 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:37:27.297 "subtype": "NVMe", 00:37:27.297 "listen_addresses": [ 00:37:27.297 { 00:37:27.297 "trtype": "TCP", 00:37:27.297 "adrfam": "IPv4", 00:37:27.297 "traddr": "10.0.0.2", 00:37:27.297 "trsvcid": "4420" 00:37:27.297 } 00:37:27.297 ], 00:37:27.297 "allow_any_host": true, 00:37:27.297 "hosts": [], 00:37:27.297 "serial_number": 
"SPDK00000000000001", 00:37:27.297 "model_number": "SPDK bdev Controller", 00:37:27.297 "max_namespaces": 1, 00:37:27.297 "min_cntlid": 1, 00:37:27.297 "max_cntlid": 65519, 00:37:27.297 "namespaces": [ 00:37:27.297 { 00:37:27.297 "nsid": 1, 00:37:27.297 "bdev_name": "Nvme0n1", 00:37:27.297 "name": "Nvme0n1", 00:37:27.297 "nguid": "36344730526054870025384500000044", 00:37:27.297 "uuid": "36344730-5260-5487-0025-384500000044" 00:37:27.297 } 00:37:27.297 ] 00:37:27.297 } 00:37:27.297 ] 00:37:27.297 10:09:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.297 10:09:42 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:27.297 10:09:42 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:37:27.297 10:09:42 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:37:27.297 10:09:42 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:37:27.297 10:09:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:27.297 10:09:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:37:27.297 10:09:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:37:27.558 10:09:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:37:27.558 10:09:42 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:37:27.558 10:09:42 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:37:27.558 10:09:42 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:27.558 10:09:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.558 10:09:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:27.558 10:09:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.558 10:09:42 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:37:27.558 10:09:42 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:37:27.558 10:09:42 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:27.558 10:09:42 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:37:27.558 10:09:42 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:27.558 10:09:42 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:37:27.558 10:09:42 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:27.558 10:09:42 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:27.558 rmmod nvme_tcp 00:37:27.558 rmmod nvme_fabrics 00:37:27.558 rmmod nvme_keyring 00:37:27.819 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:27.819 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:37:27.819 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:37:27.819 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
4178534 ']' 00:37:27.819 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 4178534 00:37:27.819 10:09:43 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 4178534 ']' 00:37:27.819 10:09:43 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 4178534 00:37:27.819 10:09:43 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:37:27.819 10:09:43 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:27.819 10:09:43 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4178534 00:37:27.819 10:09:43 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:27.819 10:09:43 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:27.819 10:09:43 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4178534' 00:37:27.819 killing process with pid 4178534 00:37:27.819 10:09:43 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 4178534 00:37:27.819 10:09:43 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 4178534 00:37:28.080 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:28.080 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:28.080 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:28.080 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:37:28.080 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:37:28.080 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:28.080 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:37:28.080 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:28.080 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:28.080 10:09:43 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:28.080 10:09:43 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:28.080 10:09:43 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:29.993 10:09:45 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:29.993 00:37:29.993 real 0m12.966s 00:37:29.993 user 0m10.198s 00:37:29.993 sys 0m6.556s 00:37:29.993 10:09:45 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:29.993 10:09:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:29.993 ************************************ 00:37:29.993 END TEST nvmf_identify_passthru 00:37:29.993 ************************************ 00:37:30.253 10:09:45 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:30.253 10:09:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:30.253 10:09:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:30.253 10:09:45 -- common/autotest_common.sh@10 -- # set +x 00:37:30.253 ************************************ 00:37:30.253 START TEST nvmf_dif 00:37:30.253 ************************************ 00:37:30.253 10:09:45 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:30.253 * Looking for test storage... 
00:37:30.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:30.253 10:09:45 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:30.253 10:09:45 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:37:30.253 10:09:45 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:30.253 10:09:45 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:30.253 10:09:45 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:30.253 10:09:45 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:30.253 10:09:45 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:30.253 10:09:45 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:37:30.253 10:09:45 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:37:30.253 10:09:45 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:37:30.253 10:09:45 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:37:30.253 10:09:45 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:37:30.253 10:09:45 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:37:30.253 10:09:45 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:37:30.253 10:09:45 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:30.253 10:09:45 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:37:30.253 10:09:45 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:37:30.253 10:09:45 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:30.253 10:09:45 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:30.253 10:09:45 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:37:30.253 10:09:45 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:37:30.253 10:09:45 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:30.253 10:09:45 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:37:30.253 10:09:45 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:37:30.253 10:09:45 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:37:30.253 10:09:45 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:37:30.253 10:09:45 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:30.253 10:09:45 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:37:30.253 10:09:45 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:37:30.253 10:09:45 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:30.253 10:09:45 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:30.253 10:09:45 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:37:30.253 10:09:45 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:30.253 10:09:45 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:30.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.253 --rc genhtml_branch_coverage=1 00:37:30.253 --rc genhtml_function_coverage=1 00:37:30.253 --rc genhtml_legend=1 00:37:30.253 --rc geninfo_all_blocks=1 00:37:30.253 --rc geninfo_unexecuted_blocks=1 00:37:30.253 00:37:30.253 ' 00:37:30.253 10:09:45 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:30.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.253 --rc genhtml_branch_coverage=1 00:37:30.253 --rc genhtml_function_coverage=1 00:37:30.253 --rc genhtml_legend=1 00:37:30.253 --rc geninfo_all_blocks=1 00:37:30.253 --rc geninfo_unexecuted_blocks=1 00:37:30.253 00:37:30.253 ' 00:37:30.253 10:09:45 nvmf_dif -- common/autotest_common.sh@1707 -- # 
export 'LCOV=lcov 00:37:30.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.253 --rc genhtml_branch_coverage=1 00:37:30.253 --rc genhtml_function_coverage=1 00:37:30.253 --rc genhtml_legend=1 00:37:30.253 --rc geninfo_all_blocks=1 00:37:30.253 --rc geninfo_unexecuted_blocks=1 00:37:30.253 00:37:30.253 ' 00:37:30.253 10:09:45 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:30.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.253 --rc genhtml_branch_coverage=1 00:37:30.253 --rc genhtml_function_coverage=1 00:37:30.253 --rc genhtml_legend=1 00:37:30.253 --rc geninfo_all_blocks=1 00:37:30.253 --rc geninfo_unexecuted_blocks=1 00:37:30.253 00:37:30.253 ' 00:37:30.253 10:09:45 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:30.253 10:09:45 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:37:30.513 10:09:45 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:30.513 10:09:45 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:30.513 10:09:45 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:30.513 10:09:45 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:30.513 10:09:45 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:30.513 10:09:45 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:30.513 10:09:45 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:30.513 10:09:45 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:30.513 10:09:45 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:30.513 10:09:45 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:30.513 10:09:45 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:30.513 10:09:45 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:30.513 10:09:45 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:30.513 10:09:45 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:30.513 10:09:45 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:30.513 10:09:45 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:30.513 10:09:45 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:30.513 10:09:45 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:37:30.513 10:09:45 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:30.513 10:09:45 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:30.513 10:09:45 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:30.513 10:09:45 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.514 10:09:45 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.514 10:09:45 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.514 10:09:45 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:37:30.514 10:09:45 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.514 10:09:45 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:37:30.514 10:09:45 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:30.514 10:09:45 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:30.514 10:09:45 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:30.514 10:09:45 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:30.514 10:09:45 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:30.514 10:09:45 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:30.514 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:30.514 10:09:45 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:30.514 10:09:45 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:30.514 10:09:45 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:30.514 10:09:45 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:37:30.514 10:09:45 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:37:30.514 10:09:45 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:37:30.514 10:09:45 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:37:30.514 10:09:45 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:37:30.514 10:09:45 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:30.514 10:09:45 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:30.514 10:09:45 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:30.514 10:09:45 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:30.514 10:09:45 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:30.514 10:09:45 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:30.514 10:09:45 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:30.514 10:09:45 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:30.514 10:09:45 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:30.514 10:09:45 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:30.514 10:09:45 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:37:30.514 10:09:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:38.656 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:38.656 
10:09:52 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:38.656 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:38.656 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:38.656 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:38.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:38.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:37:38.656 00:37:38.656 --- 10.0.0.2 ping statistics --- 00:37:38.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:38.656 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:38.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
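The trace above has just built the test network topology: one port of the dual-port E810 NIC discovered earlier (cvl_0_0) is moved into the network namespace cvl_0_0_ns_spdk and given the target address 10.0.0.2, while its sibling port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, and an iptables rule admits NVMe/TCP traffic on port 4420. The two pings verify reachability in both directions before any NVMe traffic flows. A minimal sketch of the same topology for a machine without the physical NIC, using a veth pair; the interface and namespace names here are illustrative stand-ins, not the harness's:

    ip netns add tgt_ns
    ip link add ini0 type veth peer name tgt0     # stand-in for the two E810 ports
    ip link set tgt0 netns tgt_ns                 # target side lives in its own namespace
    ip addr add 10.0.0.1/24 dev ini0
    ip link set ini0 up
    ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev tgt0
    ip netns exec tgt_ns ip link set tgt0 up
    ip netns exec tgt_ns ip link set lo up
    iptables -I INPUT 1 -i ini0 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2                            # initiator -> target
    ip netns exec tgt_ns ping -c 1 10.0.0.1       # target -> initiator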
00:37:38.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:37:38.656 00:37:38.656 --- 10.0.0.1 ping statistics --- 00:37:38.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:38.656 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:37:38.656 10:09:52 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:41.206 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:37:41.206 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:37:41.206 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:37:41.206 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:37:41.206 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:37:41.206 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:37:41.206 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:37:41.206 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:37:41.206 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:37:41.206 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:37:41.206 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:37:41.206 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:37:41.206 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:37:41.206 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:37:41.206 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:37:41.206 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:37:41.206 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:37:41.467 10:09:56 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:41.467 10:09:56 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:41.467 10:09:56 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:41.467 10:09:56 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:41.467 10:09:56 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:41.467 10:09:56 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:41.467 10:09:56 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:37:41.467 10:09:56 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:37:41.467 10:09:56 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:41.467 10:09:56 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:41.467 10:09:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:41.467 10:09:56 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=4184502 00:37:41.467 10:09:56 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 4184502 00:37:41.467 10:09:56 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 4184502 ']' 00:37:41.467 10:09:56 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:41.467 10:09:56 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:41.467 10:09:56 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:37:41.467 10:09:56 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:37:41.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:41.467 10:09:56 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:41.467 10:09:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:41.467 [2024-11-27 10:09:56.905346] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:37:41.467 [2024-11-27 10:09:56.905408] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:41.728 [2024-11-27 10:09:57.005455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:41.728 [2024-11-27 10:09:57.057415] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:41.728 [2024-11-27 10:09:57.057469] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:41.728 [2024-11-27 10:09:57.057477] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:41.728 [2024-11-27 10:09:57.057484] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:41.728 [2024-11-27 10:09:57.057491] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:41.728 [2024-11-27 10:09:57.058297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:42.300 10:09:57 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:42.300 10:09:57 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:37:42.300 10:09:57 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:42.300 10:09:57 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:42.300 10:09:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:42.300 10:09:57 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:42.300 10:09:57 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:37:42.300 10:09:57 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:37:42.300 10:09:57 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.300 10:09:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:42.300 [2024-11-27 10:09:57.764314] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:42.562 10:09:57 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.562 10:09:57 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:37:42.562 10:09:57 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:42.562 10:09:57 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:42.562 10:09:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:42.562 ************************************ 00:37:42.562 START TEST fio_dif_1_default 00:37:42.562 ************************************ 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:42.562 bdev_null0 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:42.562 [2024-11-27 10:09:57.852763] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:42.562 { 00:37:42.562 "params": { 00:37:42.562 "name": "Nvme$subsystem", 00:37:42.562 "trtype": "$TEST_TRANSPORT", 00:37:42.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:42.562 "adrfam": "ipv4", 00:37:42.562 "trsvcid": "$NVMF_PORT", 00:37:42.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:42.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:42.562 "hdgst": ${hdgst:-false}, 00:37:42.562 "ddgst": ${ddgst:-false} 00:37:42.562 }, 00:37:42.562 "method": "bdev_nvme_attach_controller" 00:37:42.562 } 00:37:42.562 EOF 00:37:42.562 )") 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
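The fio launch being assembled here feeds everything to the plugin through file descriptors: the heredoc builds one bdev_nvme_attach_controller parameter object per subsystem (with $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT expanded), the objects are joined with IFS=, and validated with jq, and the resulting JSON plus the generated fio job file reach fio as /dev/fd/62 and /dev/fd/61 via process substitution; the printf that follows shows the finished parameter object. Stripped of the harness plumbing, the invocation amounts to roughly this sketch (paths copied from the trace; gen_fio_conf stands in for the job-file generator):

    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf=<(gen_nvmf_target_json 0) <(gen_fio_conf)

The ldd | grep libasan | awk '{print $3}' steps just before the launch detect whether the plugin links against a sanitizer runtime; if one is found it must precede the plugin in LD_PRELOAD, which is why asan_lib is resolved first (empty on this build, hence the bare plugin path).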
00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:42.562 "params": { 00:37:42.562 "name": "Nvme0", 00:37:42.562 "trtype": "tcp", 00:37:42.562 "traddr": "10.0.0.2", 00:37:42.562 "adrfam": "ipv4", 00:37:42.562 "trsvcid": "4420", 00:37:42.562 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:42.562 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:42.562 "hdgst": false, 00:37:42.562 "ddgst": false 00:37:42.562 }, 00:37:42.562 "method": "bdev_nvme_attach_controller" 00:37:42.562 }' 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:42.562 10:09:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:42.822 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:42.822 fio-3.35 00:37:42.822 Starting 1 thread 00:37:55.061 00:37:55.061 filename0: (groupid=0, jobs=1): err= 0: pid=4185101: Wed Nov 27 10:10:09 2024 00:37:55.061 read: IOPS=98, BW=394KiB/s (403kB/s)(3952KiB/10039msec) 00:37:55.061 slat (nsec): min=5458, max=98280, avg=6360.43, stdev=3461.00 00:37:55.061 clat (usec): min=857, max=44520, avg=40624.11, stdev=4419.24 00:37:55.061 lat (usec): min=863, max=44563, avg=40630.47, stdev=4418.25 00:37:55.061 clat percentiles (usec): 00:37:55.061 | 1.00th=[ 1045], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:37:55.061 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:55.061 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:37:55.061 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:37:55.061 | 99.99th=[44303] 00:37:55.062 bw ( KiB/s): min= 384, max= 448, per=99.83%, avg=393.60, stdev=18.28, samples=20 00:37:55.062 iops : min= 96, max= 112, avg=98.40, stdev= 4.57, samples=20 00:37:55.062 lat (usec) : 1000=0.81% 00:37:55.062 lat (msec) : 2=0.40%, 50=98.79% 00:37:55.062 cpu : usr=93.03%, sys=6.71%, ctx=13, majf=0, minf=219 00:37:55.062 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:55.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:55.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:55.062 issued rwts: total=988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:55.062 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:55.062 
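For reference, the target stack these numbers were measured against can be restated as explicit rpc.py calls mirroring the rpc_cmd invocations traced above: a 64 MiB null bdev with 512-byte blocks, 16 bytes of per-block metadata and DIF type 1, exported through an NVMe/TCP subsystem on a transport created with --dif-insert-or-strip. The scripts/rpc.py path is an assumption (rpc_cmd resolves it inside the harness); the arguments are copied verbatim from the trace:

    scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

The teardown that follows (nvmf_delete_subsystem, then bdev_null_delete) undoes these in reverse order.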
00:37:55.062 Run status group 0 (all jobs): 00:37:55.062 READ: bw=394KiB/s (403kB/s), 394KiB/s-394KiB/s (403kB/s-403kB/s), io=3952KiB (4047kB), run=10039-10039msec 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.062 00:37:55.062 real 0m11.359s 00:37:55.062 user 0m15.915s 00:37:55.062 sys 0m1.156s 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:55.062 ************************************ 00:37:55.062 END TEST fio_dif_1_default 00:37:55.062 ************************************ 00:37:55.062 10:10:09 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:37:55.062 10:10:09 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:55.062 10:10:09 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:55.062 10:10:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:55.062 ************************************ 00:37:55.062 START TEST fio_dif_1_multi_subsystems 00:37:55.062 ************************************ 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:55.062 bdev_null0 00:37:55.062 10:10:09 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:55.062 [2024-11-27 10:10:09.291295] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:55.062 bdev_null1 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:55.062 { 00:37:55.062 "params": { 00:37:55.062 "name": "Nvme$subsystem", 00:37:55.062 "trtype": "$TEST_TRANSPORT", 00:37:55.062 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:55.062 "adrfam": "ipv4", 00:37:55.062 "trsvcid": "$NVMF_PORT", 00:37:55.062 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:55.062 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:55.062 "hdgst": ${hdgst:-false}, 00:37:55.062 "ddgst": ${ddgst:-false} 00:37:55.062 }, 00:37:55.062 "method": "bdev_nvme_attach_controller" 00:37:55.062 } 00:37:55.062 EOF 00:37:55.062 )") 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:55.062 
10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:55.062 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:55.062 { 00:37:55.062 "params": { 00:37:55.063 "name": "Nvme$subsystem", 00:37:55.063 "trtype": "$TEST_TRANSPORT", 00:37:55.063 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:55.063 "adrfam": "ipv4", 00:37:55.063 "trsvcid": "$NVMF_PORT", 00:37:55.063 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:55.063 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:55.063 "hdgst": ${hdgst:-false}, 00:37:55.063 "ddgst": ${ddgst:-false} 00:37:55.063 }, 00:37:55.063 "method": "bdev_nvme_attach_controller" 00:37:55.063 } 00:37:55.063 EOF 00:37:55.063 )") 00:37:55.063 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:37:55.063 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:55.063 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:37:55.063 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:37:55.063 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:37:55.063 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:55.063 "params": { 00:37:55.063 "name": "Nvme0", 00:37:55.063 "trtype": "tcp", 00:37:55.063 "traddr": "10.0.0.2", 00:37:55.063 "adrfam": "ipv4", 00:37:55.063 "trsvcid": "4420", 00:37:55.063 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:55.063 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:55.063 "hdgst": false, 00:37:55.063 "ddgst": false 00:37:55.063 }, 00:37:55.063 "method": "bdev_nvme_attach_controller" 00:37:55.063 },{ 00:37:55.063 "params": { 00:37:55.063 "name": "Nvme1", 00:37:55.063 "trtype": "tcp", 00:37:55.063 "traddr": "10.0.0.2", 00:37:55.063 "adrfam": "ipv4", 00:37:55.063 "trsvcid": "4420", 00:37:55.063 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:55.063 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:55.063 "hdgst": false, 00:37:55.063 "ddgst": false 00:37:55.063 }, 00:37:55.063 "method": "bdev_nvme_attach_controller" 00:37:55.063 }' 00:37:55.063 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:55.063 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:55.063 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:55.063 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:55.063 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:55.063 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:55.063 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 
-- # asan_lib= 00:37:55.063 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:55.063 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:55.063 10:10:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:55.063 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:55.063 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:55.063 fio-3.35 00:37:55.063 Starting 2 threads 00:38:07.308 00:38:07.308 filename0: (groupid=0, jobs=1): err= 0: pid=4187446: Wed Nov 27 10:10:20 2024 00:38:07.309 read: IOPS=97, BW=392KiB/s (401kB/s)(3920KiB/10007msec) 00:38:07.309 slat (nsec): min=5462, max=31031, avg=6559.66, stdev=1617.83 00:38:07.309 clat (usec): min=735, max=42193, avg=40822.94, stdev=2568.61 00:38:07.309 lat (usec): min=743, max=42224, avg=40829.50, stdev=2568.40 00:38:07.309 clat percentiles (usec): 00:38:07.309 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:38:07.309 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:07.309 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:38:07.309 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:38:07.309 | 99.99th=[42206] 00:38:07.309 bw ( KiB/s): min= 384, max= 416, per=34.56%, avg=390.40, stdev=13.13, samples=20 00:38:07.309 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:38:07.309 lat (usec) : 750=0.20%, 1000=0.20% 00:38:07.309 lat (msec) : 50=99.59% 00:38:07.309 cpu : usr=95.44%, sys=4.32%, ctx=13, majf=0, minf=114 00:38:07.309 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:07.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:07.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:07.309 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:07.309 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:07.309 filename1: (groupid=0, jobs=1): err= 0: pid=4187447: Wed Nov 27 10:10:20 2024 00:38:07.309 read: IOPS=184, BW=737KiB/s (754kB/s)(7376KiB/10011msec) 00:38:07.309 slat (nsec): min=5461, max=66190, avg=6407.15, stdev=2052.46 00:38:07.309 clat (usec): min=378, max=42507, avg=21698.02, stdev=20290.96 00:38:07.309 lat (usec): min=384, max=42513, avg=21704.43, stdev=20290.86 00:38:07.309 clat percentiles (usec): 00:38:07.309 | 1.00th=[ 396], 5.00th=[ 478], 10.00th=[ 644], 20.00th=[ 676], 00:38:07.309 | 30.00th=[ 693], 40.00th=[ 709], 50.00th=[40633], 60.00th=[41157], 00:38:07.309 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:38:07.309 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:38:07.309 | 99.99th=[42730] 00:38:07.309 bw ( KiB/s): min= 576, max= 768, per=65.23%, avg=736.00, stdev=57.81, samples=20 00:38:07.309 iops : min= 144, max= 192, avg=184.00, stdev=14.45, samples=20 00:38:07.309 lat (usec) : 500=5.21%, 750=42.41%, 1000=0.54% 00:38:07.309 lat (msec) : 50=51.84% 00:38:07.309 cpu : usr=95.50%, sys=4.27%, ctx=14, majf=0, minf=228 00:38:07.309 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:07.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:07.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:07.309 issued rwts: total=1844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:07.309 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:07.309 00:38:07.309 Run status group 0 (all jobs): 00:38:07.309 READ: bw=1128KiB/s (1155kB/s), 392KiB/s-737KiB/s (401kB/s-754kB/s), io=11.0MiB (11.6MB), run=10007-10011msec 00:38:07.309 10:10:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:38:07.309 10:10:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:38:07.309 10:10:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:07.309 10:10:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:07.309 10:10:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:38:07.309 10:10:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:07.309 10:10:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:07.309 10:10:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:07.309 10:10:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.309 10:10:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:07.309 10:10:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:07.309 10:10:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:07.309 10:10:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.309 10:10:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:07.309 10:10:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:07.309 10:10:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:38:07.309 10:10:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:07.309 10:10:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:07.309 10:10:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:07.309 10:10:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.309 10:10:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:07.309 10:10:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:07.309 10:10:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:07.309 10:10:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.309 00:38:07.309 real 0m11.607s 00:38:07.309 user 0m38.175s 00:38:07.309 sys 0m1.271s 00:38:07.309 10:10:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:07.309 10:10:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:07.309 ************************************ 00:38:07.309 END TEST fio_dif_1_multi_subsystems 00:38:07.309 ************************************ 00:38:07.309 10:10:20 nvmf_dif -- 
target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params
00:38:07.309 10:10:20 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:38:07.309 10:10:20 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable
00:38:07.309 10:10:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:38:07.309 ************************************
00:38:07.309 START TEST fio_dif_rand_params
00:38:07.309 ************************************
00:38:07.309 10:10:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params
00:38:07.309 10:10:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF
00:38:07.309 10:10:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files
00:38:07.309 10:10:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3
00:38:07.309 10:10:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k
00:38:07.309 10:10:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3
00:38:07.309 10:10:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3
00:38:07.309 10:10:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5
00:38:07.309 10:10:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0
00:38:07.309 10:10:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:38:07.309 10:10:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:38:07.309 10:10:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:38:07.309 10:10:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:38:07.309 10:10:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
00:38:07.309 10:10:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:07.309 10:10:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:07.309 bdev_null0
00:38:07.309 10:10:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:07.309 10:10:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:38:07.309 10:10:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:07.309 10:10:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:07.309 10:10:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:07.309 10:10:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:38:07.309 10:10:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:07.309 10:10:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:07.309 10:10:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:07.309 10:10:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:38:07.309 10:10:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:07.309 10:10:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:07.309 [2024-11-27 10:10:20.980240] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:07.309 10:10:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:07.309 10:10:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62
00:38:07.309 10:10:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0
00:38:07.309 10:10:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0
00:38:07.309 10:10:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=()
00:38:07.309 10:10:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:07.310 10:10:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config
00:38:07.310 10:10:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:38:07.310 10:10:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:07.310 10:10:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:38:07.310 {
00:38:07.310 "params": {
00:38:07.310 "name": "Nvme$subsystem",
00:38:07.310 "trtype": "$TEST_TRANSPORT",
00:38:07.310 "traddr": "$NVMF_FIRST_TARGET_IP",
00:38:07.310 "adrfam": "ipv4",
00:38:07.310 "trsvcid": "$NVMF_PORT",
00:38:07.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:38:07.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:38:07.310 "hdgst": ${hdgst:-false},
00:38:07.310 "ddgst": ${ddgst:-false}
00:38:07.310 },
00:38:07.310 "method": "bdev_nvme_attach_controller"
00:38:07.310 }
00:38:07.310 EOF
00:38:07.310 )")
00:38:07.310 10:10:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:38:07.310 10:10:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:38:07.310 10:10:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:38:07.310 10:10:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:38:07.310 10:10:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers
00:38:07.310 10:10:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:38:07.310 10:10:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:07.310 10:10:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift
00:38:07.310 10:10:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib=
00:38:07.310 10:10:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:38:07.310 10:10:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:38:07.310 10:10:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:07.310 10:10:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:38:07.310 10:10:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan
00:38:07.310 10:10:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:38:07.310 10:10:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:38:07.310 10:10:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq .
00:38:07.310 10:10:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=,
00:38:07.310 10:10:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:38:07.310 "params": {
00:38:07.310 "name": "Nvme0",
00:38:07.310 "trtype": "tcp",
00:38:07.310 "traddr": "10.0.0.2",
00:38:07.310 "adrfam": "ipv4",
00:38:07.310 "trsvcid": "4420",
00:38:07.310 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:38:07.310 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:38:07.310 "hdgst": false,
00:38:07.310 "ddgst": false
00:38:07.310 },
00:38:07.310 "method": "bdev_nvme_attach_controller"
00:38:07.310 }'
00:38:07.310 10:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:38:07.310 10:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:38:07.310 10:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:38:07.310 10:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:07.310 10:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:38:07.310 10:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:38:07.310 10:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:38:07.310 10:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:38:07.310 10:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:38:07.310 10:10:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:07.310 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
00:38:07.310 ...
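For reference, the xtrace above unrolls to the following standalone sequence; a minimal sketch, assuming SPDK's scripts/rpc.py can reach the running target and the fio plugin is built. The /tmp/nvme0.json path, the explicit fio job flags, and the Nvme0n1 filename are illustrative assumptions: the test itself feeds both the JSON config and the job file through /dev/fd descriptors, and the outer "subsystems" wrapper is not shown in the trace.

  #!/usr/bin/env bash
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Target side: a DIF-type-3 null bdev (512B blocks + 16B metadata),
  # exported over NVMe/TCP -- the same rpc_cmd calls as in the trace above.
  "$SPDK/scripts/rpc.py" bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: describe the controller to attach in a JSON config and
  # run fio through the spdk_bdev plugin (the "subsystems" wrapper here is
  # an assumption; the trace only prints the inner attach-controller stanza).
  cat > /tmp/nvme0.json <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [ {
    "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false, "ddgst": false } } ] } ] }
  EOF
  LD_PRELOAD="$SPDK/build/fio/spdk_bdev" /usr/src/fio/fio \
      --name=filename0 --filename=Nvme0n1 --ioengine=spdk_bdev \
      --spdk_json_conf=/tmp/nvme0.json \
      --rw=randread --bs=128k --numjobs=3 --iodepth=3 --runtime=5 --time_based=1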
00:38:07.310 fio-3.35 00:38:07.310 Starting 3 threads 00:38:12.604 00:38:12.604 filename0: (groupid=0, jobs=1): err= 0: pid=4189684: Wed Nov 27 10:10:27 2024 00:38:12.604 read: IOPS=346, BW=43.3MiB/s (45.4MB/s)(219MiB/5047msec) 00:38:12.604 slat (nsec): min=8056, max=32504, avg=8903.98, stdev=1324.23 00:38:12.604 clat (usec): min=3995, max=88800, avg=8624.91, stdev=6888.25 00:38:12.604 lat (usec): min=4004, max=88810, avg=8633.81, stdev=6888.40 00:38:12.604 clat percentiles (usec): 00:38:12.604 | 1.00th=[ 5145], 5.00th=[ 5735], 10.00th=[ 5997], 20.00th=[ 6521], 00:38:12.604 | 30.00th=[ 6915], 40.00th=[ 7242], 50.00th=[ 7504], 60.00th=[ 7701], 00:38:12.604 | 70.00th=[ 8094], 80.00th=[ 8586], 90.00th=[ 9241], 95.00th=[ 9896], 00:38:12.604 | 99.00th=[47449], 99.50th=[49021], 99.90th=[50594], 99.95th=[88605], 00:38:12.604 | 99.99th=[88605] 00:38:12.604 bw ( KiB/s): min=17664, max=52736, per=40.73%, avg=44672.00, stdev=10661.72, samples=10 00:38:12.604 iops : min= 138, max= 412, avg=349.00, stdev=83.29, samples=10 00:38:12.604 lat (msec) : 4=0.06%, 10=95.82%, 20=1.32%, 50=2.57%, 100=0.23% 00:38:12.604 cpu : usr=94.55%, sys=5.19%, ctx=7, majf=0, minf=60 00:38:12.604 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:12.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:12.604 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:12.604 issued rwts: total=1748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:12.604 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:12.604 filename0: (groupid=0, jobs=1): err= 0: pid=4189685: Wed Nov 27 10:10:27 2024 00:38:12.604 read: IOPS=148, BW=18.5MiB/s (19.4MB/s)(92.8MiB/5006msec) 00:38:12.604 slat (nsec): min=5677, max=32619, avg=8611.12, stdev=1515.56 00:38:12.604 clat (msec): min=4, max=131, avg=20.22, stdev=22.96 00:38:12.604 lat (msec): min=4, max=131, avg=20.23, stdev=22.96 00:38:12.604 clat percentiles (msec): 00:38:12.604 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8], 00:38:12.604 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:38:12.604 | 70.00th=[ 10], 80.00th=[ 48], 90.00th=[ 50], 95.00th=[ 52], 00:38:12.604 | 99.00th=[ 91], 99.50th=[ 92], 99.90th=[ 132], 99.95th=[ 132], 00:38:12.604 | 99.99th=[ 132] 00:38:12.604 bw ( KiB/s): min=12800, max=27904, per=17.25%, avg=18918.40, stdev=4652.74, samples=10 00:38:12.604 iops : min= 100, max= 218, avg=147.80, stdev=36.35, samples=10 00:38:12.604 lat (msec) : 10=72.51%, 20=2.43%, 50=16.71%, 100=7.95%, 250=0.40% 00:38:12.604 cpu : usr=95.76%, sys=3.98%, ctx=8, majf=0, minf=72 00:38:12.604 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:12.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:12.604 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:12.604 issued rwts: total=742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:12.604 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:12.604 filename0: (groupid=0, jobs=1): err= 0: pid=4189686: Wed Nov 27 10:10:27 2024 00:38:12.604 read: IOPS=366, BW=45.8MiB/s (48.1MB/s)(229MiB/5004msec) 00:38:12.604 slat (nsec): min=5508, max=31128, avg=8224.34, stdev=1493.36 00:38:12.604 clat (usec): min=4431, max=49574, avg=8169.46, stdev=3841.23 00:38:12.604 lat (usec): min=4440, max=49605, avg=8177.68, stdev=3841.43 00:38:12.604 clat percentiles (usec): 00:38:12.604 | 1.00th=[ 4686], 5.00th=[ 5604], 10.00th=[ 6128], 20.00th=[ 6587], 00:38:12.604 | 30.00th=[ 7111], 
40.00th=[ 7504], 50.00th=[ 7767], 60.00th=[ 8029], 00:38:12.604 | 70.00th=[ 8455], 80.00th=[ 9110], 90.00th=[10028], 95.00th=[10552], 00:38:12.604 | 99.00th=[11863], 99.50th=[46924], 99.90th=[48497], 99.95th=[49546], 00:38:12.604 | 99.99th=[49546] 00:38:12.604 bw ( KiB/s): min=43520, max=50944, per=42.78%, avg=46924.80, stdev=1889.13, samples=10 00:38:12.604 iops : min= 340, max= 398, avg=366.60, stdev=14.76, samples=10 00:38:12.604 lat (msec) : 10=89.81%, 20=9.37%, 50=0.82% 00:38:12.604 cpu : usr=93.64%, sys=6.14%, ctx=5, majf=0, minf=127 00:38:12.604 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:12.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:12.604 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:12.604 issued rwts: total=1835,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:12.604 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:12.604 00:38:12.604 Run status group 0 (all jobs): 00:38:12.604 READ: bw=107MiB/s (112MB/s), 18.5MiB/s-45.8MiB/s (19.4MB/s-48.1MB/s), io=541MiB (567MB), run=5004-5047msec 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 
--dif-type 2 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:12.604 bdev_null0 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:12.604 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:12.605 [2024-11-27 10:10:27.260113] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:12.605 bdev_null1 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.605 
10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:12.605 bdev_null2 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:12.605 { 00:38:12.605 "params": { 00:38:12.605 "name": "Nvme$subsystem", 00:38:12.605 "trtype": "$TEST_TRANSPORT", 00:38:12.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:12.605 "adrfam": "ipv4", 00:38:12.605 "trsvcid": "$NVMF_PORT", 00:38:12.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:12.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:12.605 "hdgst": ${hdgst:-false}, 00:38:12.605 "ddgst": ${ddgst:-false} 00:38:12.605 }, 00:38:12.605 "method": "bdev_nvme_attach_controller" 00:38:12.605 } 00:38:12.605 EOF 00:38:12.605 )") 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:12.605 { 00:38:12.605 "params": { 00:38:12.605 "name": "Nvme$subsystem", 00:38:12.605 "trtype": "$TEST_TRANSPORT", 00:38:12.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:12.605 "adrfam": "ipv4", 00:38:12.605 "trsvcid": "$NVMF_PORT", 00:38:12.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:12.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:12.605 "hdgst": ${hdgst:-false}, 00:38:12.605 "ddgst": ${ddgst:-false} 00:38:12.605 }, 00:38:12.605 "method": "bdev_nvme_attach_controller" 00:38:12.605 } 00:38:12.605 EOF 00:38:12.605 )") 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:12.605 10:10:27 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:38:12.605 {
00:38:12.605 "params": {
00:38:12.605 "name": "Nvme$subsystem",
00:38:12.605 "trtype": "$TEST_TRANSPORT",
00:38:12.605 "traddr": "$NVMF_FIRST_TARGET_IP",
00:38:12.605 "adrfam": "ipv4",
00:38:12.605 "trsvcid": "$NVMF_PORT",
00:38:12.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:38:12.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:38:12.605 "hdgst": ${hdgst:-false},
00:38:12.605 "ddgst": ${ddgst:-false}
00:38:12.605 },
00:38:12.605 "method": "bdev_nvme_attach_controller"
00:38:12.605 }
00:38:12.605 EOF
00:38:12.605 )")
00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq .
00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=,
00:38:12.605 10:10:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:38:12.605 "params": {
00:38:12.605 "name": "Nvme0",
00:38:12.605 "trtype": "tcp",
00:38:12.605 "traddr": "10.0.0.2",
00:38:12.605 "adrfam": "ipv4",
00:38:12.605 "trsvcid": "4420",
00:38:12.605 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:38:12.605 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:38:12.605 "hdgst": false,
00:38:12.605 "ddgst": false
00:38:12.605 },
00:38:12.605 "method": "bdev_nvme_attach_controller"
00:38:12.605 },{
00:38:12.605 "params": {
00:38:12.605 "name": "Nvme1",
00:38:12.605 "trtype": "tcp",
00:38:12.605 "traddr": "10.0.0.2",
00:38:12.605 "adrfam": "ipv4",
00:38:12.605 "trsvcid": "4420",
00:38:12.605 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:38:12.605 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:38:12.605 "hdgst": false,
00:38:12.605 "ddgst": false
00:38:12.605 },
00:38:12.605 "method": "bdev_nvme_attach_controller"
00:38:12.605 },{
00:38:12.605 "params": {
00:38:12.605 "name": "Nvme2",
00:38:12.606 "trtype": "tcp",
00:38:12.606 "traddr": "10.0.0.2",
00:38:12.606 "adrfam": "ipv4",
00:38:12.606 "trsvcid": "4420",
00:38:12.606 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:38:12.606 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:38:12.606 "hdgst": false,
00:38:12.606 "ddgst": false
00:38:12.606 },
00:38:12.606 "method": "bdev_nvme_attach_controller"
00:38:12.606 }'
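The heredoc/printf trace above is the config-assembly loop: one attach-controller stanza is appended to the config array per subsystem id, the stanzas are comma-joined via IFS, and the result is pretty-printed through jq. A minimal sketch of the same pattern follows; it is a reconstruction, not the verbatim nvmf/common.sh helper, and the outer "subsystems"/"bdev" wrapper plus the fixed address and port are assumptions rather than part of the trace.

  # Sketch: build one bdev_nvme_attach_controller stanza per subsystem id,
  # join them with commas, and wrap the list in a bdev-subsystem JSON config.
  gen_target_json() {
      local subsystem config=()
      for subsystem in "${@:-1}"; do
          config+=("$(cat <<EOF
  {
    "params": {
      "name": "Nvme$subsystem",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }
EOF
          )")
      done
      # IFS=, makes "${config[*]}" expand comma-joined, as in the IFS=, /
      # printf records above; jq . validates and pretty-prints the document.
      local IFS=,
      jq . <<EOF
  { "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
  }

  gen_target_json 0 1 2   # yields the three Nvme0/Nvme1/Nvme2 stanzas printed above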
00:38:12.606 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:38:12.606 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:38:12.606 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:38:12.606 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:12.606 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:38:12.606 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:38:12.606 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:38:12.606 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:38:12.606 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:38:12.606 10:10:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:12.606 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:38:12.606 ...
00:38:12.606 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:38:12.606 ...
00:38:12.606 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:38:12.606 ...
00:38:12.606 fio-3.35
00:38:12.606 Starting 24 threads
00:38:24.847
00:38:24.847 filename0: (groupid=0, jobs=1): err= 0: pid=4191145: Wed Nov 27 10:10:38 2024
00:38:24.847 read: IOPS=724, BW=2898KiB/s (2968kB/s)(28.4MiB/10022msec)
00:38:24.847 slat (usec): min=5, max=106, avg=14.39, stdev=14.70
00:38:24.847 clat (usec): min=6008, max=42130, avg=21973.42, stdev=4191.66
00:38:24.847 lat (usec): min=6018, max=42164, avg=21987.82, stdev=4193.71
00:38:24.847 clat percentiles (usec):
00:38:24.847 | 1.00th=[10421], 5.00th=[14353], 10.00th=[15401], 20.00th=[18744],
00:38:24.847 | 30.00th=[21890], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462],
00:38:24.847 | 70.00th=[23725], 80.00th=[24249], 90.00th=[24773], 95.00th=[26084],
00:38:24.847 | 99.00th=[34866], 99.50th=[38011], 99.90th=[39584], 99.95th=[42206],
00:38:24.847 | 99.99th=[42206]
00:38:24.847 bw ( KiB/s): min= 2688, max= 3312, per=4.40%, avg=2898.00, stdev=190.80, samples=20
00:38:24.847 iops : min= 672, max= 828, avg=724.50, stdev=47.70, samples=20
00:38:24.847 lat (msec) : 10=0.95%, 20=23.65%, 50=75.40%
00:38:24.847 cpu : usr=98.91%, sys=0.77%, ctx=27, majf=0, minf=9
00:38:24.847 IO depths : 1=3.5%, 2=7.1%, 4=16.7%, 8=63.4%, 16=9.2%, 32=0.0%, >=64=0.0%
00:38:24.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:24.847 complete : 0=0.0%, 4=91.8%, 8=2.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:24.847 issued rwts: total=7261,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:24.847 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:24.847 filename0: (groupid=0, jobs=1): err= 0: pid=4191146: Wed Nov 27 10:10:38 2024
00:38:24.847 read: IOPS=688, BW=2756KiB/s (2822kB/s)(27.0MiB/10022msec)
00:38:24.847 slat (usec): min=5, max=122, avg=18.11, stdev=15.73
00:38:24.847 clat (usec): min=7095, max=41049, avg=23081.47, stdev=2648.67
00:38:24.847 lat (usec): min=7106, max=41063, avg=23099.58, stdev=2649.67
00:38:24.847 clat percentiles (usec):
00:38:24.847 | 1.00th=[13173], 5.00th=[16581], 10.00th=[21890], 20.00th=[22938],
00:38:24.847 | 30.00th=[23200], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725],
00:38:24.847 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[25297],
00:38:24.847 | 99.00th=[27395], 99.50th=[32113], 99.90th=[36439], 99.95th=[41157],
00:38:24.847 | 99.99th=[41157]
00:38:24.847 bw ( KiB/s): min= 2560, max= 2944, per=4.18%, avg=2755.20, stdev=94.67, samples=20
00:38:24.847 iops : min= 640, max= 736, avg=688.80, stdev=23.67, samples=20
00:38:24.847 lat (msec) : 10=0.38%, 20=8.30%, 50=91.32%
00:38:24.847 cpu : usr=98.77%, sys=0.90%, ctx=14, majf=0, minf=9
00:38:24.847 IO depths : 1=5.5%, 2=11.0%, 4=22.7%, 8=53.7%, 16=7.0%, 32=0.0%, >=64=0.0%
00:38:24.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%,
64=0.0%, >=64=0.0% 00:38:24.847 complete : 0=0.0%, 4=93.5%, 8=0.8%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.847 issued rwts: total=6904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.847 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:24.847 filename0: (groupid=0, jobs=1): err= 0: pid=4191147: Wed Nov 27 10:10:38 2024 00:38:24.847 read: IOPS=689, BW=2760KiB/s (2826kB/s)(27.0MiB/10009msec) 00:38:24.847 slat (usec): min=5, max=119, avg=20.49, stdev=16.43 00:38:24.847 clat (usec): min=5777, max=43715, avg=23030.10, stdev=3686.40 00:38:24.847 lat (usec): min=5783, max=43733, avg=23050.60, stdev=3688.85 00:38:24.847 clat percentiles (usec): 00:38:24.847 | 1.00th=[12387], 5.00th=[15270], 10.00th=[18482], 20.00th=[22414], 00:38:24.847 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:38:24.847 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25035], 95.00th=[27657], 00:38:24.847 | 99.00th=[35914], 99.50th=[38011], 99.90th=[43779], 99.95th=[43779], 00:38:24.847 | 99.99th=[43779] 00:38:24.847 bw ( KiB/s): min= 2624, max= 3024, per=4.17%, avg=2749.47, stdev=109.47, samples=19 00:38:24.847 iops : min= 656, max= 756, avg=687.37, stdev=27.37, samples=19 00:38:24.847 lat (msec) : 10=0.48%, 20=12.89%, 50=86.63% 00:38:24.847 cpu : usr=98.86%, sys=0.81%, ctx=13, majf=0, minf=9 00:38:24.847 IO depths : 1=3.2%, 2=6.7%, 4=16.2%, 8=63.8%, 16=10.0%, 32=0.0%, >=64=0.0% 00:38:24.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.848 complete : 0=0.0%, 4=91.8%, 8=3.2%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.848 issued rwts: total=6906,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.848 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:24.848 filename0: (groupid=0, jobs=1): err= 0: pid=4191148: Wed Nov 27 10:10:38 2024 00:38:24.848 read: IOPS=675, BW=2703KiB/s (2768kB/s)(26.4MiB/10002msec) 00:38:24.848 slat (usec): min=5, max=125, avg=21.36, stdev=16.14 00:38:24.848 clat (usec): min=11537, max=41128, avg=23509.00, stdev=2772.02 00:38:24.848 lat (usec): min=11543, max=41141, avg=23530.36, stdev=2773.46 00:38:24.848 clat percentiles (usec): 00:38:24.848 | 1.00th=[13829], 5.00th=[18220], 10.00th=[22414], 20.00th=[22938], 00:38:24.848 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:38:24.848 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25035], 95.00th=[26084], 00:38:24.848 | 99.00th=[32900], 99.50th=[36963], 99.90th=[38536], 99.95th=[39060], 00:38:24.848 | 99.99th=[41157] 00:38:24.848 bw ( KiB/s): min= 2560, max= 2928, per=4.09%, avg=2697.26, stdev=86.59, samples=19 00:38:24.848 iops : min= 640, max= 732, avg=674.32, stdev=21.65, samples=19 00:38:24.848 lat (msec) : 20=6.33%, 50=93.67% 00:38:24.848 cpu : usr=98.98%, sys=0.68%, ctx=14, majf=0, minf=9 00:38:24.848 IO depths : 1=4.0%, 2=8.5%, 4=20.0%, 8=58.4%, 16=9.1%, 32=0.0%, >=64=0.0% 00:38:24.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.848 complete : 0=0.0%, 4=93.0%, 8=1.8%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.848 issued rwts: total=6758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.848 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:24.848 filename0: (groupid=0, jobs=1): err= 0: pid=4191149: Wed Nov 27 10:10:38 2024 00:38:24.848 read: IOPS=686, BW=2745KiB/s (2811kB/s)(26.8MiB/10001msec) 00:38:24.848 slat (usec): min=5, max=103, avg=19.13, stdev=15.25 00:38:24.848 clat (usec): min=8789, max=46107, avg=23168.19, stdev=3497.50 00:38:24.848 lat (usec): min=8795, max=46130, avg=23187.32, 
stdev=3499.11 00:38:24.848 clat percentiles (usec): 00:38:24.848 | 1.00th=[12518], 5.00th=[16188], 10.00th=[18744], 20.00th=[22414], 00:38:24.848 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:38:24.848 | 70.00th=[23987], 80.00th=[24511], 90.00th=[25560], 95.00th=[28181], 00:38:24.848 | 99.00th=[33817], 99.50th=[35914], 99.90th=[40109], 99.95th=[45876], 00:38:24.848 | 99.99th=[45876] 00:38:24.848 bw ( KiB/s): min= 2560, max= 2928, per=4.16%, avg=2742.16, stdev=89.68, samples=19 00:38:24.848 iops : min= 640, max= 732, avg=685.53, stdev=22.43, samples=19 00:38:24.848 lat (msec) : 10=0.06%, 20=12.79%, 50=87.15% 00:38:24.848 cpu : usr=98.65%, sys=1.01%, ctx=12, majf=0, minf=9 00:38:24.848 IO depths : 1=3.1%, 2=6.4%, 4=15.7%, 8=64.5%, 16=10.3%, 32=0.0%, >=64=0.0% 00:38:24.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.848 complete : 0=0.0%, 4=91.7%, 8=3.4%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.848 issued rwts: total=6864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.848 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:24.848 filename0: (groupid=0, jobs=1): err= 0: pid=4191150: Wed Nov 27 10:10:38 2024 00:38:24.848 read: IOPS=675, BW=2702KiB/s (2766kB/s)(26.4MiB/10021msec) 00:38:24.848 slat (usec): min=5, max=127, avg=16.28, stdev=13.27 00:38:24.848 clat (usec): min=5080, max=28053, avg=23549.46, stdev=1644.26 00:38:24.848 lat (usec): min=5088, max=28061, avg=23565.74, stdev=1643.95 00:38:24.848 clat percentiles (usec): 00:38:24.848 | 1.00th=[13829], 5.00th=[22414], 10.00th=[22676], 20.00th=[22938], 00:38:24.848 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:38:24.848 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[25035], 00:38:24.848 | 99.00th=[26608], 99.50th=[27132], 99.90th=[27919], 99.95th=[27919], 00:38:24.848 | 99.99th=[28181] 00:38:24.848 bw ( KiB/s): min= 2560, max= 2949, per=4.10%, avg=2701.05, stdev=82.80, samples=20 00:38:24.848 iops : min= 640, max= 737, avg=675.25, stdev=20.66, samples=20 00:38:24.848 lat (msec) : 10=0.24%, 20=1.54%, 50=98.23% 00:38:24.848 cpu : usr=98.95%, sys=0.71%, ctx=12, majf=0, minf=9 00:38:24.848 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:38:24.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.848 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.848 issued rwts: total=6768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.848 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:24.848 filename0: (groupid=0, jobs=1): err= 0: pid=4191151: Wed Nov 27 10:10:38 2024 00:38:24.848 read: IOPS=687, BW=2749KiB/s (2815kB/s)(26.9MiB/10003msec) 00:38:24.848 slat (usec): min=5, max=132, avg=20.90, stdev=17.43 00:38:24.848 clat (usec): min=11661, max=40663, avg=23115.49, stdev=3116.15 00:38:24.848 lat (usec): min=11670, max=40672, avg=23136.39, stdev=3118.33 00:38:24.848 clat percentiles (usec): 00:38:24.848 | 1.00th=[13435], 5.00th=[16057], 10.00th=[19530], 20.00th=[22676], 00:38:24.848 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:38:24.848 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25035], 95.00th=[26084], 00:38:24.848 | 99.00th=[33424], 99.50th=[36439], 99.90th=[40633], 99.95th=[40633], 00:38:24.848 | 99.99th=[40633] 00:38:24.848 bw ( KiB/s): min= 2560, max= 3280, per=4.18%, avg=2752.84, stdev=167.76, samples=19 00:38:24.848 iops : min= 640, max= 820, avg=688.21, stdev=41.94, samples=19 00:38:24.848 lat 
(msec) : 20=11.32%, 50=88.68% 00:38:24.848 cpu : usr=98.76%, sys=0.90%, ctx=13, majf=0, minf=9 00:38:24.848 IO depths : 1=4.3%, 2=8.8%, 4=19.2%, 8=59.2%, 16=8.5%, 32=0.0%, >=64=0.0% 00:38:24.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.848 complete : 0=0.0%, 4=92.5%, 8=2.1%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.848 issued rwts: total=6874,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.848 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:24.848 filename0: (groupid=0, jobs=1): err= 0: pid=4191152: Wed Nov 27 10:10:38 2024 00:38:24.848 read: IOPS=679, BW=2719KiB/s (2785kB/s)(26.6MiB/10020msec) 00:38:24.848 slat (usec): min=5, max=112, avg=20.04, stdev=16.32 00:38:24.848 clat (usec): min=8419, max=38002, avg=23376.67, stdev=2379.13 00:38:24.848 lat (usec): min=8428, max=38010, avg=23396.71, stdev=2379.59 00:38:24.848 clat percentiles (usec): 00:38:24.848 | 1.00th=[12911], 5.00th=[19268], 10.00th=[22414], 20.00th=[22938], 00:38:24.848 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:38:24.848 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25035], 95.00th=[25560], 00:38:24.848 | 99.00th=[27657], 99.50th=[30540], 99.90th=[38011], 99.95th=[38011], 00:38:24.848 | 99.99th=[38011] 00:38:24.848 bw ( KiB/s): min= 2560, max= 3008, per=4.13%, avg=2718.40, stdev=91.24, samples=20 00:38:24.848 iops : min= 640, max= 752, avg=679.60, stdev=22.81, samples=20 00:38:24.848 lat (msec) : 10=0.50%, 20=5.43%, 50=94.07% 00:38:24.848 cpu : usr=98.96%, sys=0.71%, ctx=15, majf=0, minf=9 00:38:24.848 IO depths : 1=5.2%, 2=10.4%, 4=21.7%, 8=55.1%, 16=7.5%, 32=0.0%, >=64=0.0% 00:38:24.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.848 complete : 0=0.0%, 4=93.2%, 8=1.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.848 issued rwts: total=6812,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.848 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:24.848 filename1: (groupid=0, jobs=1): err= 0: pid=4191153: Wed Nov 27 10:10:38 2024 00:38:24.848 read: IOPS=697, BW=2789KiB/s (2856kB/s)(27.3MiB/10012msec) 00:38:24.848 slat (usec): min=5, max=111, avg=23.11, stdev=19.73 00:38:24.848 clat (usec): min=8823, max=40986, avg=22749.89, stdev=3341.10 00:38:24.848 lat (usec): min=8831, max=40994, avg=22772.99, stdev=3344.60 00:38:24.848 clat percentiles (usec): 00:38:24.848 | 1.00th=[13698], 5.00th=[15401], 10.00th=[17695], 20.00th=[22152], 00:38:24.848 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23462], 00:38:24.848 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[26084], 00:38:24.848 | 99.00th=[33162], 99.50th=[35390], 99.90th=[40109], 99.95th=[40109], 00:38:24.848 | 99.99th=[41157] 00:38:24.848 bw ( KiB/s): min= 2608, max= 3120, per=4.23%, avg=2787.37, stdev=153.33, samples=19 00:38:24.848 iops : min= 652, max= 780, avg=696.84, stdev=38.33, samples=19 00:38:24.848 lat (msec) : 10=0.09%, 20=14.99%, 50=84.93% 00:38:24.848 cpu : usr=98.79%, sys=0.77%, ctx=41, majf=0, minf=9 00:38:24.848 IO depths : 1=3.9%, 2=8.0%, 4=17.8%, 8=61.3%, 16=9.0%, 32=0.0%, >=64=0.0% 00:38:24.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.848 complete : 0=0.0%, 4=92.1%, 8=2.7%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.848 issued rwts: total=6980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.848 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:24.848 filename1: (groupid=0, jobs=1): err= 0: pid=4191154: Wed Nov 27 10:10:38 2024 00:38:24.848 read: 
IOPS=677, BW=2709KiB/s (2774kB/s)(26.5MiB/10001msec) 00:38:24.848 slat (usec): min=5, max=126, avg=20.11, stdev=19.20 00:38:24.848 clat (usec): min=3001, max=42328, avg=23512.11, stdev=3981.82 00:38:24.848 lat (usec): min=3006, max=42358, avg=23532.23, stdev=3983.07 00:38:24.848 clat percentiles (usec): 00:38:24.848 | 1.00th=[13435], 5.00th=[16319], 10.00th=[18744], 20.00th=[22152], 00:38:24.848 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23725], 60.00th=[23987], 00:38:24.848 | 70.00th=[24249], 80.00th=[24773], 90.00th=[27657], 95.00th=[30278], 00:38:24.848 | 99.00th=[37487], 99.50th=[39584], 99.90th=[42206], 99.95th=[42206], 00:38:24.848 | 99.99th=[42206] 00:38:24.848 bw ( KiB/s): min= 2576, max= 2896, per=4.11%, avg=2707.37, stdev=88.09, samples=19 00:38:24.848 iops : min= 644, max= 724, avg=676.84, stdev=22.02, samples=19 00:38:24.848 lat (msec) : 4=0.09%, 10=0.34%, 20=12.99%, 50=86.58% 00:38:24.848 cpu : usr=98.95%, sys=0.73%, ctx=20, majf=0, minf=9 00:38:24.848 IO depths : 1=0.7%, 2=1.5%, 4=6.3%, 8=77.2%, 16=14.3%, 32=0.0%, >=64=0.0% 00:38:24.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.848 complete : 0=0.0%, 4=89.6%, 8=7.3%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.848 issued rwts: total=6774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.848 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:24.848 filename1: (groupid=0, jobs=1): err= 0: pid=4191155: Wed Nov 27 10:10:38 2024 00:38:24.848 read: IOPS=684, BW=2739KiB/s (2805kB/s)(26.8MiB/10020msec) 00:38:24.848 slat (usec): min=5, max=101, avg=19.22, stdev=15.07 00:38:24.848 clat (usec): min=6898, max=37597, avg=23211.80, stdev=2927.70 00:38:24.848 lat (usec): min=6905, max=37604, avg=23231.02, stdev=2929.23 00:38:24.848 clat percentiles (usec): 00:38:24.848 | 1.00th=[13042], 5.00th=[17171], 10.00th=[20841], 20.00th=[22676], 00:38:24.848 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:38:24.848 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25035], 95.00th=[25822], 00:38:24.849 | 99.00th=[33162], 99.50th=[35914], 99.90th=[36963], 99.95th=[37487], 00:38:24.849 | 99.99th=[37487] 00:38:24.849 bw ( KiB/s): min= 2560, max= 2976, per=4.16%, avg=2738.65, stdev=107.22, samples=20 00:38:24.849 iops : min= 640, max= 744, avg=684.65, stdev=26.78, samples=20 00:38:24.849 lat (msec) : 10=0.32%, 20=8.47%, 50=91.21% 00:38:24.849 cpu : usr=98.88%, sys=0.78%, ctx=18, majf=0, minf=9 00:38:24.849 IO depths : 1=4.2%, 2=8.3%, 4=18.8%, 8=60.2%, 16=8.5%, 32=0.0%, >=64=0.0% 00:38:24.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.849 complete : 0=0.0%, 4=92.4%, 8=2.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.849 issued rwts: total=6862,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.849 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:24.849 filename1: (groupid=0, jobs=1): err= 0: pid=4191156: Wed Nov 27 10:10:38 2024 00:38:24.849 read: IOPS=684, BW=2739KiB/s (2804kB/s)(26.8MiB/10009msec) 00:38:24.849 slat (usec): min=5, max=124, avg=18.06, stdev=16.20 00:38:24.849 clat (usec): min=8096, max=53379, avg=23237.46, stdev=3803.86 00:38:24.849 lat (usec): min=8103, max=53396, avg=23255.52, stdev=3804.92 00:38:24.849 clat percentiles (usec): 00:38:24.849 | 1.00th=[12911], 5.00th=[15401], 10.00th=[18482], 20.00th=[22152], 00:38:24.849 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:38:24.849 | 70.00th=[24249], 80.00th=[24773], 90.00th=[26084], 95.00th=[29230], 00:38:24.849 | 99.00th=[35390], 
99.50th=[38536], 99.90th=[43779], 99.95th=[43779], 00:38:24.849 | 99.99th=[53216] 00:38:24.849 bw ( KiB/s): min= 2456, max= 2896, per=4.14%, avg=2731.37, stdev=112.80, samples=19 00:38:24.849 iops : min= 614, max= 724, avg=682.84, stdev=28.20, samples=19 00:38:24.849 lat (msec) : 10=0.18%, 20=13.44%, 50=86.37%, 100=0.01% 00:38:24.849 cpu : usr=98.80%, sys=0.87%, ctx=15, majf=0, minf=9 00:38:24.849 IO depths : 1=2.4%, 2=4.8%, 4=12.0%, 8=69.1%, 16=11.8%, 32=0.0%, >=64=0.0% 00:38:24.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.849 complete : 0=0.0%, 4=90.9%, 8=5.1%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.849 issued rwts: total=6853,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.849 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:24.849 filename1: (groupid=0, jobs=1): err= 0: pid=4191157: Wed Nov 27 10:10:38 2024 00:38:24.849 read: IOPS=689, BW=2759KiB/s (2825kB/s)(26.9MiB/10002msec) 00:38:24.849 slat (usec): min=5, max=120, avg=20.06, stdev=16.73 00:38:24.849 clat (usec): min=6819, max=41688, avg=23054.65, stdev=3918.00 00:38:24.849 lat (usec): min=6824, max=41695, avg=23074.72, stdev=3920.17 00:38:24.849 clat percentiles (usec): 00:38:24.849 | 1.00th=[12911], 5.00th=[15008], 10.00th=[17957], 20.00th=[21890], 00:38:24.849 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:38:24.849 | 70.00th=[23987], 80.00th=[24511], 90.00th=[25822], 95.00th=[28705], 00:38:24.849 | 99.00th=[36439], 99.50th=[39584], 99.90th=[41681], 99.95th=[41681], 00:38:24.849 | 99.99th=[41681] 00:38:24.849 bw ( KiB/s): min= 2629, max= 2944, per=4.16%, avg=2739.63, stdev=84.73, samples=19 00:38:24.849 iops : min= 657, max= 736, avg=684.89, stdev=21.20, samples=19 00:38:24.849 lat (msec) : 10=0.26%, 20=14.42%, 50=85.31% 00:38:24.849 cpu : usr=98.80%, sys=0.86%, ctx=14, majf=0, minf=9 00:38:24.849 IO depths : 1=2.2%, 2=4.8%, 4=12.1%, 8=68.8%, 16=12.1%, 32=0.0%, >=64=0.0% 00:38:24.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.849 complete : 0=0.0%, 4=91.0%, 8=5.2%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.849 issued rwts: total=6898,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.849 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:24.849 filename1: (groupid=0, jobs=1): err= 0: pid=4191158: Wed Nov 27 10:10:38 2024 00:38:24.849 read: IOPS=682, BW=2731KiB/s (2797kB/s)(26.7MiB/10024msec) 00:38:24.849 slat (usec): min=5, max=111, avg= 9.74, stdev= 9.64 00:38:24.849 clat (usec): min=8910, max=40371, avg=23350.14, stdev=2371.49 00:38:24.849 lat (usec): min=8916, max=40380, avg=23359.88, stdev=2370.97 00:38:24.849 clat percentiles (usec): 00:38:24.849 | 1.00th=[10683], 5.00th=[20055], 10.00th=[22676], 20.00th=[22938], 00:38:24.849 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:38:24.849 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24773], 95.00th=[25297], 00:38:24.849 | 99.00th=[26870], 99.50th=[27132], 99.90th=[34341], 99.95th=[40109], 00:38:24.849 | 99.99th=[40633] 00:38:24.849 bw ( KiB/s): min= 2560, max= 2992, per=4.14%, avg=2731.20, stdev=94.03, samples=20 00:38:24.849 iops : min= 640, max= 748, avg=682.80, stdev=23.51, samples=20 00:38:24.849 lat (msec) : 10=0.69%, 20=4.37%, 50=94.94% 00:38:24.849 cpu : usr=98.81%, sys=0.85%, ctx=14, majf=0, minf=9 00:38:24.849 IO depths : 1=5.9%, 2=11.9%, 4=24.0%, 8=51.5%, 16=6.6%, 32=0.0%, >=64=0.0% 00:38:24.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.849 complete : 0=0.0%, 4=93.8%, 
8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.849 issued rwts: total=6844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.849 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:24.849 filename1: (groupid=0, jobs=1): err= 0: pid=4191159: Wed Nov 27 10:10:38 2024 00:38:24.849 read: IOPS=686, BW=2747KiB/s (2813kB/s)(26.9MiB/10022msec) 00:38:24.849 slat (usec): min=5, max=103, avg=11.73, stdev= 9.96 00:38:24.849 clat (usec): min=7702, max=40645, avg=23208.78, stdev=2810.99 00:38:24.849 lat (usec): min=7709, max=40653, avg=23220.51, stdev=2811.01 00:38:24.849 clat percentiles (usec): 00:38:24.849 | 1.00th=[12518], 5.00th=[17433], 10.00th=[21103], 20.00th=[22938], 00:38:24.849 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:38:24.849 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25035], 95.00th=[25822], 00:38:24.849 | 99.00th=[30540], 99.50th=[32637], 99.90th=[37487], 99.95th=[40633], 00:38:24.849 | 99.99th=[40633] 00:38:24.849 bw ( KiB/s): min= 2560, max= 2992, per=4.17%, avg=2746.40, stdev=111.01, samples=20 00:38:24.849 iops : min= 640, max= 748, avg=686.60, stdev=27.75, samples=20 00:38:24.849 lat (msec) : 10=0.58%, 20=8.12%, 50=91.30% 00:38:24.849 cpu : usr=98.83%, sys=0.84%, ctx=14, majf=0, minf=9 00:38:24.849 IO depths : 1=5.1%, 2=10.3%, 4=21.9%, 8=55.2%, 16=7.5%, 32=0.0%, >=64=0.0% 00:38:24.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.849 complete : 0=0.0%, 4=93.3%, 8=1.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.849 issued rwts: total=6882,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.849 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:24.849 filename1: (groupid=0, jobs=1): err= 0: pid=4191160: Wed Nov 27 10:10:38 2024 00:38:24.849 read: IOPS=688, BW=2754KiB/s (2820kB/s)(26.9MiB/10011msec) 00:38:24.849 slat (usec): min=5, max=124, avg=22.21, stdev=18.53 00:38:24.849 clat (usec): min=9703, max=41307, avg=23051.27, stdev=3862.99 00:38:24.849 lat (usec): min=9713, max=41331, avg=23073.48, stdev=3865.09 00:38:24.849 clat percentiles (usec): 00:38:24.849 | 1.00th=[12649], 5.00th=[15401], 10.00th=[17695], 20.00th=[22152], 00:38:24.849 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:38:24.849 | 70.00th=[23987], 80.00th=[24511], 90.00th=[25822], 95.00th=[29492], 00:38:24.849 | 99.00th=[35390], 99.50th=[36963], 99.90th=[40633], 99.95th=[41157], 00:38:24.849 | 99.99th=[41157] 00:38:24.849 bw ( KiB/s): min= 2544, max= 3152, per=4.17%, avg=2746.11, stdev=167.74, samples=19 00:38:24.849 iops : min= 636, max= 788, avg=686.53, stdev=41.94, samples=19 00:38:24.849 lat (msec) : 10=0.09%, 20=14.32%, 50=85.59% 00:38:24.849 cpu : usr=98.76%, sys=0.87%, ctx=87, majf=0, minf=9 00:38:24.849 IO depths : 1=3.4%, 2=6.9%, 4=16.2%, 8=63.8%, 16=9.6%, 32=0.0%, >=64=0.0% 00:38:24.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.849 complete : 0=0.0%, 4=91.7%, 8=3.2%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.849 issued rwts: total=6892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.849 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:24.849 filename2: (groupid=0, jobs=1): err= 0: pid=4191161: Wed Nov 27 10:10:38 2024 00:38:24.849 read: IOPS=688, BW=2755KiB/s (2821kB/s)(27.0MiB/10020msec) 00:38:24.849 slat (nsec): min=5491, max=92068, avg=20635.19, stdev=14813.98 00:38:24.849 clat (usec): min=5272, max=38548, avg=23062.97, stdev=2975.97 00:38:24.849 lat (usec): min=5281, max=38555, avg=23083.61, stdev=2978.44 00:38:24.849 clat 
percentiles (usec): 00:38:24.849 | 1.00th=[13042], 5.00th=[16188], 10.00th=[19530], 20.00th=[22676], 00:38:24.849 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:38:24.849 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25035], 95.00th=[25822], 00:38:24.849 | 99.00th=[31851], 99.50th=[33424], 99.90th=[35390], 99.95th=[38536], 00:38:24.849 | 99.99th=[38536] 00:38:24.849 bw ( KiB/s): min= 2560, max= 3264, per=4.18%, avg=2754.40, stdev=160.11, samples=20 00:38:24.849 iops : min= 640, max= 816, avg=688.60, stdev=40.03, samples=20 00:38:24.849 lat (msec) : 10=0.23%, 20=10.11%, 50=89.66% 00:38:24.849 cpu : usr=98.67%, sys=0.99%, ctx=15, majf=0, minf=9 00:38:24.849 IO depths : 1=4.8%, 2=9.6%, 4=20.6%, 8=57.2%, 16=7.9%, 32=0.0%, >=64=0.0% 00:38:24.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.849 complete : 0=0.0%, 4=92.8%, 8=1.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.849 issued rwts: total=6902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.849 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:24.850 filename2: (groupid=0, jobs=1): err= 0: pid=4191162: Wed Nov 27 10:10:38 2024 00:38:24.850 read: IOPS=680, BW=2721KiB/s (2787kB/s)(26.6MiB/10022msec) 00:38:24.850 slat (nsec): min=5522, max=86480, avg=12930.68, stdev=10278.97 00:38:24.850 clat (usec): min=8754, max=39242, avg=23413.76, stdev=2254.82 00:38:24.850 lat (usec): min=8760, max=39248, avg=23426.70, stdev=2254.88 00:38:24.850 clat percentiles (usec): 00:38:24.850 | 1.00th=[12125], 5.00th=[20317], 10.00th=[22414], 20.00th=[22938], 00:38:24.850 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:38:24.850 | 70.00th=[23987], 80.00th=[24511], 90.00th=[25035], 95.00th=[25560], 00:38:24.850 | 99.00th=[26870], 99.50th=[27395], 99.90th=[38011], 99.95th=[38011], 00:38:24.850 | 99.99th=[39060] 00:38:24.850 bw ( KiB/s): min= 2560, max= 3120, per=4.13%, avg=2720.80, stdev=121.90, samples=20 00:38:24.850 iops : min= 640, max= 780, avg=680.20, stdev=30.48, samples=20 00:38:24.850 lat (msec) : 10=0.32%, 20=4.56%, 50=95.12% 00:38:24.850 cpu : usr=98.91%, sys=0.76%, ctx=12, majf=0, minf=9 00:38:24.850 IO depths : 1=5.6%, 2=11.4%, 4=23.7%, 8=52.3%, 16=6.9%, 32=0.0%, >=64=0.0% 00:38:24.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.850 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.850 issued rwts: total=6818,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.850 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:24.850 filename2: (groupid=0, jobs=1): err= 0: pid=4191163: Wed Nov 27 10:10:38 2024 00:38:24.850 read: IOPS=700, BW=2804KiB/s (2871kB/s)(27.4MiB/10001msec) 00:38:24.850 slat (nsec): min=5473, max=97217, avg=11888.74, stdev=10260.88 00:38:24.850 clat (usec): min=8566, max=54725, avg=22765.31, stdev=4448.21 00:38:24.850 lat (usec): min=8573, max=54750, avg=22777.20, stdev=4449.85 00:38:24.850 clat percentiles (usec): 00:38:24.850 | 1.00th=[12911], 5.00th=[14353], 10.00th=[16057], 20.00th=[19530], 00:38:24.850 | 30.00th=[22414], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:38:24.850 | 70.00th=[24249], 80.00th=[24511], 90.00th=[26084], 95.00th=[29754], 00:38:24.850 | 99.00th=[38011], 99.50th=[40109], 99.90th=[41681], 99.95th=[41681], 00:38:24.850 | 99.99th=[54789] 00:38:24.850 bw ( KiB/s): min= 2528, max= 2976, per=4.22%, avg=2780.63, stdev=112.96, samples=19 00:38:24.850 iops : min= 632, max= 744, avg=695.16, stdev=28.24, samples=19 00:38:24.850 lat (msec) : 
10=0.17%, 20=20.96%, 50=78.84%, 100=0.03% 00:38:24.850 cpu : usr=98.71%, sys=0.96%, ctx=13, majf=0, minf=9 00:38:24.850 IO depths : 1=0.2%, 2=1.3%, 4=6.8%, 8=76.6%, 16=15.1%, 32=0.0%, >=64=0.0% 00:38:24.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.850 complete : 0=0.0%, 4=89.9%, 8=7.2%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.850 issued rwts: total=7010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.850 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:24.850 filename2: (groupid=0, jobs=1): err= 0: pid=4191164: Wed Nov 27 10:10:38 2024 00:38:24.850 read: IOPS=708, BW=2834KiB/s (2902kB/s)(27.7MiB/10010msec) 00:38:24.850 slat (usec): min=5, max=124, avg=20.17, stdev=17.73 00:38:24.850 clat (usec): min=9144, max=45305, avg=22444.11, stdev=4203.59 00:38:24.850 lat (usec): min=9153, max=45314, avg=22464.28, stdev=4206.93 00:38:24.850 clat percentiles (usec): 00:38:24.850 | 1.00th=[12780], 5.00th=[14484], 10.00th=[16057], 20.00th=[19006], 00:38:24.850 | 30.00th=[22152], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:38:24.850 | 70.00th=[23987], 80.00th=[24511], 90.00th=[25560], 95.00th=[29492], 00:38:24.850 | 99.00th=[34866], 99.50th=[36963], 99.90th=[40109], 99.95th=[45351], 00:38:24.850 | 99.99th=[45351] 00:38:24.850 bw ( KiB/s): min= 2640, max= 3120, per=4.30%, avg=2830.32, stdev=139.70, samples=19 00:38:24.850 iops : min= 660, max= 780, avg=707.58, stdev=34.93, samples=19 00:38:24.850 lat (msec) : 10=0.11%, 20=23.42%, 50=76.47% 00:38:24.850 cpu : usr=98.99%, sys=0.69%, ctx=19, majf=0, minf=9 00:38:24.850 IO depths : 1=2.1%, 2=4.3%, 4=11.9%, 8=70.4%, 16=11.4%, 32=0.0%, >=64=0.0% 00:38:24.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.850 complete : 0=0.0%, 4=90.6%, 8=4.7%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.850 issued rwts: total=7092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.850 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:24.850 filename2: (groupid=0, jobs=1): err= 0: pid=4191165: Wed Nov 27 10:10:38 2024 00:38:24.850 read: IOPS=671, BW=2685KiB/s (2749kB/s)(26.2MiB/10003msec) 00:38:24.850 slat (usec): min=5, max=127, avg=21.89, stdev=20.76 00:38:24.850 clat (usec): min=8537, max=42062, avg=23686.13, stdev=3648.87 00:38:24.850 lat (usec): min=8543, max=42078, avg=23708.02, stdev=3649.62 00:38:24.850 clat percentiles (usec): 00:38:24.850 | 1.00th=[13566], 5.00th=[16909], 10.00th=[20317], 20.00th=[22676], 00:38:24.850 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:38:24.850 | 70.00th=[24249], 80.00th=[24511], 90.00th=[27132], 95.00th=[30278], 00:38:24.850 | 99.00th=[37487], 99.50th=[39060], 99.90th=[42206], 99.95th=[42206], 00:38:24.850 | 99.99th=[42206] 00:38:24.850 bw ( KiB/s): min= 2496, max= 2896, per=4.06%, avg=2678.74, stdev=89.93, samples=19 00:38:24.850 iops : min= 624, max= 724, avg=669.68, stdev=22.48, samples=19 00:38:24.850 lat (msec) : 10=0.21%, 20=9.34%, 50=90.45% 00:38:24.850 cpu : usr=99.08%, sys=0.60%, ctx=36, majf=0, minf=9 00:38:24.850 IO depths : 1=1.9%, 2=3.9%, 4=10.1%, 8=71.3%, 16=12.8%, 32=0.0%, >=64=0.0% 00:38:24.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.850 complete : 0=0.0%, 4=90.6%, 8=5.9%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.850 issued rwts: total=6714,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.850 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:24.850 filename2: (groupid=0, jobs=1): err= 0: pid=4191166: Wed Nov 27 10:10:38 2024 
00:38:24.850 read: IOPS=670, BW=2681KiB/s (2745kB/s)(26.2MiB/10004msec) 00:38:24.850 slat (usec): min=5, max=182, avg=26.07, stdev=20.10 00:38:24.850 clat (usec): min=10455, max=36357, avg=23614.42, stdev=1228.01 00:38:24.850 lat (usec): min=10468, max=36365, avg=23640.49, stdev=1228.65 00:38:24.850 clat percentiles (usec): 00:38:24.850 | 1.00th=[21103], 5.00th=[22414], 10.00th=[22676], 20.00th=[22938], 00:38:24.850 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:38:24.850 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[25035], 00:38:24.850 | 99.00th=[26608], 99.50th=[27132], 99.90th=[30802], 99.95th=[31589], 00:38:24.850 | 99.99th=[36439] 00:38:24.850 bw ( KiB/s): min= 2560, max= 2816, per=4.07%, avg=2681.26, stdev=67.11, samples=19 00:38:24.850 iops : min= 640, max= 704, avg=670.32, stdev=16.78, samples=19 00:38:24.850 lat (msec) : 20=0.87%, 50=99.13% 00:38:24.850 cpu : usr=99.12%, sys=0.57%, ctx=15, majf=0, minf=9 00:38:24.850 IO depths : 1=5.8%, 2=11.7%, 4=24.1%, 8=51.7%, 16=6.7%, 32=0.0%, >=64=0.0% 00:38:24.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.850 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.850 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.850 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:24.850 filename2: (groupid=0, jobs=1): err= 0: pid=4191167: Wed Nov 27 10:10:38 2024 00:38:24.850 read: IOPS=696, BW=2786KiB/s (2853kB/s)(27.2MiB/10001msec) 00:38:24.850 slat (usec): min=5, max=134, avg=18.73, stdev=17.40 00:38:24.850 clat (usec): min=8845, max=41076, avg=22824.11, stdev=3901.95 00:38:24.850 lat (usec): min=8852, max=41089, avg=22842.84, stdev=3903.97 00:38:24.850 clat percentiles (usec): 00:38:24.850 | 1.00th=[12518], 5.00th=[14877], 10.00th=[16712], 20.00th=[21627], 00:38:24.850 | 30.00th=[22676], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:38:24.850 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25297], 95.00th=[28967], 00:38:24.850 | 99.00th=[34341], 99.50th=[36963], 99.90th=[40109], 99.95th=[41157], 00:38:24.850 | 99.99th=[41157] 00:38:24.850 bw ( KiB/s): min= 2533, max= 2992, per=4.23%, avg=2785.11, stdev=133.17, samples=19 00:38:24.850 iops : min= 633, max= 748, avg=696.26, stdev=33.32, samples=19 00:38:24.850 lat (msec) : 10=0.11%, 20=16.51%, 50=83.38% 00:38:24.850 cpu : usr=98.80%, sys=0.85%, ctx=24, majf=0, minf=9 00:38:24.850 IO depths : 1=2.6%, 2=5.3%, 4=12.9%, 8=67.9%, 16=11.3%, 32=0.0%, >=64=0.0% 00:38:24.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.850 complete : 0=0.0%, 4=91.0%, 8=4.7%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.850 issued rwts: total=6966,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.850 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:24.850 filename2: (groupid=0, jobs=1): err= 0: pid=4191168: Wed Nov 27 10:10:38 2024 00:38:24.850 read: IOPS=677, BW=2711KiB/s (2777kB/s)(26.5MiB/10002msec) 00:38:24.850 slat (usec): min=5, max=129, avg=24.10, stdev=20.64 00:38:24.850 clat (usec): min=10195, max=56156, avg=23384.89, stdev=3104.08 00:38:24.850 lat (usec): min=10208, max=56175, avg=23408.98, stdev=3105.02 00:38:24.850 clat percentiles (usec): 00:38:24.850 | 1.00th=[12649], 5.00th=[17171], 10.00th=[22152], 20.00th=[22676], 00:38:24.850 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:38:24.850 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[25822], 00:38:24.850 | 99.00th=[35914], 
99.50th=[40109], 99.90th=[40633], 99.95th=[40633], 00:38:24.850 | 99.99th=[56361] 00:38:24.850 bw ( KiB/s): min= 2480, max= 2960, per=4.11%, avg=2706.53, stdev=97.95, samples=19 00:38:24.850 iops : min= 620, max= 740, avg=676.63, stdev=24.49, samples=19 00:38:24.850 lat (msec) : 20=7.71%, 50=92.27%, 100=0.01% 00:38:24.850 cpu : usr=99.02%, sys=0.66%, ctx=15, majf=0, minf=9 00:38:24.850 IO depths : 1=4.0%, 2=8.6%, 4=19.3%, 8=58.8%, 16=9.3%, 32=0.0%, >=64=0.0% 00:38:24.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.850 complete : 0=0.0%, 4=92.7%, 8=2.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.850 issued rwts: total=6780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.850 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:24.850 00:38:24.850 Run status group 0 (all jobs): 00:38:24.850 READ: bw=64.3MiB/s (67.5MB/s), 2681KiB/s-2898KiB/s (2745kB/s-2968kB/s), io=645MiB (676MB), run=10001-10024msec 00:38:24.850 10:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:38:24.850 10:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:24.850 10:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:24.850 10:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:24.850 10:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:24.850 10:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:24.850 10:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@46 -- # destroy_subsystem 2 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.851 bdev_null0 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.851 10:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:24.851 10:10:39 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.851 [2024-11-27 10:10:39.030330] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.851 bdev_null1 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:24.851 10:10:39 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:24.851 { 00:38:24.851 "params": { 00:38:24.851 "name": "Nvme$subsystem", 00:38:24.851 "trtype": "$TEST_TRANSPORT", 00:38:24.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:24.851 "adrfam": "ipv4", 00:38:24.851 "trsvcid": "$NVMF_PORT", 00:38:24.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:24.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:24.851 "hdgst": ${hdgst:-false}, 00:38:24.851 "ddgst": ${ddgst:-false} 00:38:24.851 }, 00:38:24.851 "method": "bdev_nvme_attach_controller" 00:38:24.851 } 00:38:24.851 EOF 00:38:24.851 )") 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:24.851 10:10:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:24.852 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:38:24.852 10:10:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:24.852 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:24.852 10:10:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:24.852 10:10:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:24.852 10:10:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:24.852 { 00:38:24.852 "params": { 00:38:24.852 "name": "Nvme$subsystem", 00:38:24.852 "trtype": "$TEST_TRANSPORT", 00:38:24.852 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:24.852 "adrfam": "ipv4", 00:38:24.852 "trsvcid": "$NVMF_PORT", 00:38:24.852 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:24.852 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:24.852 "hdgst": ${hdgst:-false}, 00:38:24.852 "ddgst": ${ddgst:-false} 00:38:24.852 }, 00:38:24.852 "method": "bdev_nvme_attach_controller" 00:38:24.852 } 00:38:24.852 EOF 00:38:24.852 )") 00:38:24.852 10:10:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:24.852 10:10:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:24.852 10:10:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:24.852 10:10:39 nvmf_dif.fio_dif_rand_params 
-- nvmf/common.sh@584 -- # jq . 00:38:24.852 10:10:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:38:24.852 10:10:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:24.852 "params": { 00:38:24.852 "name": "Nvme0", 00:38:24.852 "trtype": "tcp", 00:38:24.852 "traddr": "10.0.0.2", 00:38:24.852 "adrfam": "ipv4", 00:38:24.852 "trsvcid": "4420", 00:38:24.852 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:24.852 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:24.852 "hdgst": false, 00:38:24.852 "ddgst": false 00:38:24.852 }, 00:38:24.852 "method": "bdev_nvme_attach_controller" 00:38:24.852 },{ 00:38:24.852 "params": { 00:38:24.852 "name": "Nvme1", 00:38:24.852 "trtype": "tcp", 00:38:24.852 "traddr": "10.0.0.2", 00:38:24.852 "adrfam": "ipv4", 00:38:24.852 "trsvcid": "4420", 00:38:24.852 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:24.852 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:24.852 "hdgst": false, 00:38:24.852 "ddgst": false 00:38:24.852 }, 00:38:24.852 "method": "bdev_nvme_attach_controller" 00:38:24.852 }' 00:38:24.852 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:24.852 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:24.852 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:24.852 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:24.852 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:24.852 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:24.852 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:24.852 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:24.852 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:24.852 10:10:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:24.852 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:24.852 ... 00:38:24.852 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:24.852 ... 
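The invocation traced just above is the crux of these dif tests: the harness emits a JSON bdev config (one bdev_nvme_attach_controller entry per subsystem) on one anonymous fd, the generated fio job file on another, and runs fio with SPDK's bdev ioengine preloaded. A minimal standalone sketch of the same pattern follows — assuming the workspace path from this run, the standard SPDK JSON-config wrapper, a target already listening on 10.0.0.2:4420, and temp files standing in for the /dev/fd plumbing; the real run uses two jobs and two files, simplified here to one:

#!/usr/bin/env bash
# Sketch of the fio-over-spdk_bdev pattern traced above; not the harness itself.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from this run

# One bdev_nvme_attach_controller entry, wrapped in the standard SPDK JSON-config
# layout; attaching controller "Nvme0" surfaces bdev "Nvme0n1" for fio to address.
conf=$(mktemp)
cat > "$conf" <<'JSON'
{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_nvme_attach_controller",
 "params":{"name":"Nvme0","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4",
 "trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode0",
 "hostnqn":"nqn.2016-06.io.spdk:host0","hdgst":false,"ddgst":false}}]}]}
JSON

# Job equivalent to the generated one: randread, bs 8k/16k/128k, iodepth 8,
# runtime 5 as set by dif.sh@115 for this pass.
job=$(mktemp)
cat > "$job" <<'FIO'
[global]
ioengine=spdk_bdev
thread=1
time_based=1
runtime=5
[filename0]
rw=randread
bs=8k,16k,128k
iodepth=8
filename=Nvme0n1
FIO

LD_PRELOAD="$SPDK_DIR/build/fio/spdk_bdev" \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf="$conf" "$job"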
00:38:24.852 fio-3.35 00:38:24.852 Starting 4 threads 00:38:30.141 00:38:30.141 filename0: (groupid=0, jobs=1): err= 0: pid=4193504: Wed Nov 27 10:10:45 2024 00:38:30.141 read: IOPS=3034, BW=23.7MiB/s (24.9MB/s)(119MiB/5002msec) 00:38:30.141 slat (nsec): min=5454, max=50677, avg=5849.33, stdev=1095.21 00:38:30.141 clat (usec): min=1213, max=4475, avg=2620.08, stdev=342.50 00:38:30.141 lat (usec): min=1220, max=4481, avg=2625.93, stdev=342.51 00:38:30.141 clat percentiles (usec): 00:38:30.141 | 1.00th=[ 1975], 5.00th=[ 2057], 10.00th=[ 2212], 20.00th=[ 2311], 00:38:30.141 | 30.00th=[ 2442], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2704], 00:38:30.141 | 70.00th=[ 2737], 80.00th=[ 2737], 90.00th=[ 2933], 95.00th=[ 3425], 00:38:30.141 | 99.00th=[ 3654], 99.50th=[ 3654], 99.90th=[ 4047], 99.95th=[ 4228], 00:38:30.141 | 99.99th=[ 4490] 00:38:30.141 bw ( KiB/s): min=23600, max=24752, per=25.99%, avg=24279.11, stdev=434.74, samples=9 00:38:30.141 iops : min= 2950, max= 3094, avg=3034.89, stdev=54.34, samples=9 00:38:30.141 lat (msec) : 2=1.15%, 4=98.74%, 10=0.11% 00:38:30.141 cpu : usr=97.04%, sys=2.72%, ctx=7, majf=0, minf=88 00:38:30.141 IO depths : 1=0.1%, 2=0.8%, 4=70.4%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:30.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:30.141 complete : 0=0.0%, 4=93.5%, 8=6.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:30.141 issued rwts: total=15181,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:30.141 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:30.141 filename0: (groupid=0, jobs=1): err= 0: pid=4193505: Wed Nov 27 10:10:45 2024 00:38:30.141 read: IOPS=2882, BW=22.5MiB/s (23.6MB/s)(113MiB/5002msec) 00:38:30.141 slat (nsec): min=5469, max=76403, avg=8108.85, stdev=2767.10 00:38:30.141 clat (usec): min=1743, max=4783, avg=2752.36, stdev=182.28 00:38:30.141 lat (usec): min=1752, max=4789, avg=2760.47, stdev=182.26 00:38:30.141 clat percentiles (usec): 00:38:30.141 | 1.00th=[ 2311], 5.00th=[ 2507], 10.00th=[ 2573], 20.00th=[ 2671], 00:38:30.141 | 30.00th=[ 2704], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2737], 00:38:30.141 | 70.00th=[ 2769], 80.00th=[ 2900], 90.00th=[ 2966], 95.00th=[ 3032], 00:38:30.141 | 99.00th=[ 3359], 99.50th=[ 3621], 99.90th=[ 4146], 99.95th=[ 4359], 00:38:30.141 | 99.99th=[ 4752] 00:38:30.141 bw ( KiB/s): min=22960, max=23280, per=24.70%, avg=23077.33, stdev=104.61, samples=9 00:38:30.141 iops : min= 2870, max= 2910, avg=2884.67, stdev=13.08, samples=9 00:38:30.141 lat (msec) : 2=0.04%, 4=99.79%, 10=0.17% 00:38:30.141 cpu : usr=96.54%, sys=3.22%, ctx=6, majf=0, minf=102 00:38:30.141 IO depths : 1=0.1%, 2=0.8%, 4=72.2%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:30.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:30.141 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:30.141 issued rwts: total=14420,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:30.141 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:30.141 filename1: (groupid=0, jobs=1): err= 0: pid=4193507: Wed Nov 27 10:10:45 2024 00:38:30.141 read: IOPS=2867, BW=22.4MiB/s (23.5MB/s)(112MiB/5001msec) 00:38:30.141 slat (nsec): min=5459, max=57732, avg=8084.07, stdev=2851.42 00:38:30.141 clat (usec): min=955, max=5505, avg=2767.91, stdev=211.28 00:38:30.141 lat (usec): min=961, max=5532, avg=2776.00, stdev=211.34 00:38:30.141 clat percentiles (usec): 00:38:30.141 | 1.00th=[ 2442], 5.00th=[ 2540], 10.00th=[ 2638], 20.00th=[ 2704], 00:38:30.141 | 30.00th=[ 2704], 
40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2737], 00:38:30.141 | 70.00th=[ 2737], 80.00th=[ 2900], 90.00th=[ 2966], 95.00th=[ 2999], 00:38:30.141 | 99.00th=[ 3621], 99.50th=[ 4080], 99.90th=[ 4752], 99.95th=[ 4948], 00:38:30.141 | 99.99th=[ 5473] 00:38:30.141 bw ( KiB/s): min=22864, max=23072, per=24.57%, avg=22952.89, stdev=80.84, samples=9 00:38:30.141 iops : min= 2858, max= 2884, avg=2869.11, stdev=10.11, samples=9 00:38:30.141 lat (usec) : 1000=0.02% 00:38:30.141 lat (msec) : 2=0.10%, 4=99.24%, 10=0.64% 00:38:30.141 cpu : usr=96.40%, sys=3.34%, ctx=5, majf=0, minf=77 00:38:30.141 IO depths : 1=0.1%, 2=0.1%, 4=73.8%, 8=26.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:30.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:30.141 complete : 0=0.0%, 4=91.0%, 8=9.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:30.141 issued rwts: total=14338,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:30.141 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:30.141 filename1: (groupid=0, jobs=1): err= 0: pid=4193508: Wed Nov 27 10:10:45 2024 00:38:30.141 read: IOPS=2895, BW=22.6MiB/s (23.7MB/s)(113MiB/5001msec) 00:38:30.141 slat (nsec): min=5458, max=67646, avg=8541.54, stdev=2527.93 00:38:30.141 clat (usec): min=1162, max=4299, avg=2743.42, stdev=198.72 00:38:30.141 lat (usec): min=1168, max=4324, avg=2751.96, stdev=198.78 00:38:30.141 clat percentiles (usec): 00:38:30.141 | 1.00th=[ 2114], 5.00th=[ 2507], 10.00th=[ 2573], 20.00th=[ 2671], 00:38:30.141 | 30.00th=[ 2704], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2737], 00:38:30.141 | 70.00th=[ 2769], 80.00th=[ 2868], 90.00th=[ 2966], 95.00th=[ 3032], 00:38:30.141 | 99.00th=[ 3359], 99.50th=[ 3621], 99.90th=[ 3851], 99.95th=[ 4228], 00:38:30.141 | 99.99th=[ 4293] 00:38:30.141 bw ( KiB/s): min=22960, max=23744, per=24.78%, avg=23150.33, stdev=247.24, samples=9 00:38:30.141 iops : min= 2870, max= 2968, avg=2893.78, stdev=30.91, samples=9 00:38:30.141 lat (msec) : 2=0.56%, 4=99.36%, 10=0.08% 00:38:30.141 cpu : usr=95.76%, sys=3.98%, ctx=6, majf=0, minf=57 00:38:30.141 IO depths : 1=0.1%, 2=0.1%, 4=67.4%, 8=32.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:30.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:30.141 complete : 0=0.0%, 4=96.2%, 8=3.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:30.141 issued rwts: total=14478,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:30.141 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:30.141 00:38:30.141 Run status group 0 (all jobs): 00:38:30.141 READ: bw=91.2MiB/s (95.7MB/s), 22.4MiB/s-23.7MiB/s (23.5MB/s-24.9MB/s), io=456MiB (479MB), run=5001-5002msec 00:38:30.142 10:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:38:30.142 10:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:30.142 10:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:30.142 10:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:30.142 10:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:30.142 10:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:30.142 10:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:30.142 10:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:30.142 10:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:30.142 
10:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:30.142 10:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:30.142 10:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:30.142 10:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:30.142 10:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:30.142 10:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:30.142 10:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:30.142 10:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:30.142 10:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:30.142 10:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:30.142 10:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:30.142 10:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:30.142 10:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:30.142 10:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:30.142 10:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:30.142 00:38:30.142 real 0m24.429s 00:38:30.142 user 5m16.784s 00:38:30.142 sys 0m4.435s 00:38:30.142 10:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:30.142 10:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:30.142 ************************************ 00:38:30.142 END TEST fio_dif_rand_params 00:38:30.142 ************************************ 00:38:30.142 10:10:45 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:38:30.142 10:10:45 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:30.142 10:10:45 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:30.142 10:10:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:30.142 ************************************ 00:38:30.142 START TEST fio_dif_digest 00:38:30.142 ************************************ 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:38:30.142 10:10:45 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:30.142 bdev_null0 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:30.142 [2024-11-27 10:10:45.492768] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:30.142 { 00:38:30.142 "params": { 00:38:30.142 "name": "Nvme$subsystem", 00:38:30.142 "trtype": "$TEST_TRANSPORT", 00:38:30.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:30.142 "adrfam": "ipv4", 00:38:30.142 "trsvcid": "$NVMF_PORT", 00:38:30.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:38:30.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:30.142 "hdgst": ${hdgst:-false}, 00:38:30.142 "ddgst": ${ddgst:-false} 00:38:30.142 }, 00:38:30.142 "method": "bdev_nvme_attach_controller" 00:38:30.142 } 00:38:30.142 EOF 00:38:30.142 )") 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:38:30.142 10:10:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:30.143 10:10:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:38:30.143 10:10:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:30.143 10:10:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:30.143 10:10:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:38:30.143 10:10:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:30.143 10:10:45 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:38:30.143 10:10:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:38:30.143 10:10:45 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:38:30.143 10:10:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:30.143 10:10:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:38:30.143 10:10:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:38:30.143 10:10:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:30.143 "params": { 00:38:30.143 "name": "Nvme0", 00:38:30.143 "trtype": "tcp", 00:38:30.143 "traddr": "10.0.0.2", 00:38:30.143 "adrfam": "ipv4", 00:38:30.143 "trsvcid": "4420", 00:38:30.143 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:30.143 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:30.143 "hdgst": true, 00:38:30.143 "ddgst": true 00:38:30.143 }, 00:38:30.143 "method": "bdev_nvme_attach_controller" 00:38:30.143 }' 00:38:30.143 10:10:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:30.143 10:10:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:30.143 10:10:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:30.143 10:10:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:30.143 10:10:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:30.143 10:10:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:30.143 10:10:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:30.143 10:10:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:30.143 10:10:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:30.143 10:10:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:30.735 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:30.735 ... 
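The digest pass needs only two deltas, both visible in the trace above: the null bdev is recreated with --dif-type 3, and the attach parameters flip "hdgst"/"ddgst" to true so the NVMe/TCP connection negotiates header and data digests. A sketch of the target-side RPCs, assuming scripts/rpc.py talks to the running target over its default socket (rpc_cmd in the trace is a thin wrapper around it):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from this run
RPC="$SPDK_DIR/scripts/rpc.py"

# Null bdev: 64 MB, 512-byte blocks, 16 bytes of metadata, DIF type 3 (per the trace).
"$RPC" bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
# Expose it over NVMe/TCP exactly as the harness does for subsystem 0.
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420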
00:38:30.735 fio-3.35 00:38:30.735 Starting 3 threads 00:38:42.963 00:38:42.963 filename0: (groupid=0, jobs=1): err= 0: pid=1448: Wed Nov 27 10:10:56 2024 00:38:42.963 read: IOPS=368, BW=46.1MiB/s (48.3MB/s)(463MiB/10045msec) 00:38:42.963 slat (nsec): min=5889, max=31950, avg=6619.27, stdev=1043.12 00:38:42.963 clat (usec): min=5077, max=51755, avg=8118.12, stdev=2333.60 00:38:42.963 lat (usec): min=5083, max=51762, avg=8124.74, stdev=2333.63 00:38:42.963 clat percentiles (usec): 00:38:42.963 | 1.00th=[ 5735], 5.00th=[ 6063], 10.00th=[ 6325], 20.00th=[ 6718], 00:38:42.963 | 30.00th=[ 7177], 40.00th=[ 7767], 50.00th=[ 8160], 60.00th=[ 8455], 00:38:42.963 | 70.00th=[ 8717], 80.00th=[ 9110], 90.00th=[ 9634], 95.00th=[10159], 00:38:42.963 | 99.00th=[11207], 99.50th=[11731], 99.90th=[51119], 99.95th=[51643], 00:38:42.963 | 99.99th=[51643] 00:38:42.963 bw ( KiB/s): min=41984, max=52480, per=42.06%, avg=47372.80, stdev=2822.09, samples=20 00:38:42.963 iops : min= 328, max= 410, avg=370.10, stdev=22.05, samples=20 00:38:42.963 lat (msec) : 10=94.27%, 20=5.51%, 50=0.05%, 100=0.16% 00:38:42.963 cpu : usr=93.33%, sys=6.43%, ctx=20, majf=0, minf=130 00:38:42.963 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:42.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:42.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:42.963 issued rwts: total=3703,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:42.963 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:42.963 filename0: (groupid=0, jobs=1): err= 0: pid=1449: Wed Nov 27 10:10:56 2024 00:38:42.963 read: IOPS=182, BW=22.8MiB/s (23.9MB/s)(229MiB/10037msec) 00:38:42.963 slat (nsec): min=5917, max=46136, avg=6789.37, stdev=1367.83 00:38:42.963 clat (usec): min=6895, max=94973, avg=16426.15, stdev=15697.60 00:38:42.963 lat (usec): min=6901, max=94980, avg=16432.94, stdev=15697.58 00:38:42.963 clat percentiles (usec): 00:38:42.963 | 1.00th=[ 8094], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9634], 00:38:42.963 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10683], 00:38:42.963 | 70.00th=[11076], 80.00th=[11469], 90.00th=[50594], 95.00th=[51643], 00:38:42.963 | 99.00th=[53740], 99.50th=[91751], 99.90th=[93848], 99.95th=[94897], 00:38:42.963 | 99.99th=[94897] 00:38:42.963 bw ( KiB/s): min=12800, max=31232, per=20.79%, avg=23411.20, stdev=4793.26, samples=20 00:38:42.963 iops : min= 100, max= 244, avg=182.90, stdev=37.45, samples=20 00:38:42.963 lat (msec) : 10=34.17%, 20=51.53%, 50=2.18%, 100=12.12% 00:38:42.963 cpu : usr=95.62%, sys=4.16%, ctx=22, majf=0, minf=63 00:38:42.963 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:42.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:42.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:42.963 issued rwts: total=1832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:42.963 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:42.963 filename0: (groupid=0, jobs=1): err= 0: pid=1450: Wed Nov 27 10:10:56 2024 00:38:42.963 read: IOPS=328, BW=41.1MiB/s (43.1MB/s)(413MiB/10046msec) 00:38:42.963 slat (nsec): min=5884, max=32097, avg=6639.25, stdev=1065.00 00:38:42.963 clat (usec): min=4955, max=46084, avg=9100.68, stdev=1699.13 00:38:42.963 lat (usec): min=4961, max=46090, avg=9107.32, stdev=1699.25 00:38:42.963 clat percentiles (usec): 00:38:42.963 | 1.00th=[ 6390], 5.00th=[ 6980], 10.00th=[ 7242], 20.00th=[ 7635], 
00:38:42.963 | 30.00th=[ 8029], 40.00th=[ 8455], 50.00th=[ 9110], 60.00th=[ 9634], 00:38:42.963 | 70.00th=[10028], 80.00th=[10421], 90.00th=[10945], 95.00th=[11338], 00:38:42.963 | 99.00th=[11994], 99.50th=[12387], 99.90th=[14615], 99.95th=[45351], 00:38:42.963 | 99.99th=[45876] 00:38:42.963 bw ( KiB/s): min=39680, max=46848, per=37.53%, avg=42265.60, stdev=1913.75, samples=20 00:38:42.963 iops : min= 310, max= 366, avg=330.20, stdev=14.95, samples=20 00:38:42.963 lat (msec) : 10=68.67%, 20=31.27%, 50=0.06% 00:38:42.963 cpu : usr=94.33%, sys=5.45%, ctx=19, majf=0, minf=205 00:38:42.963 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:42.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:42.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:42.963 issued rwts: total=3304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:42.963 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:42.963 00:38:42.963 Run status group 0 (all jobs): 00:38:42.963 READ: bw=110MiB/s (115MB/s), 22.8MiB/s-46.1MiB/s (23.9MB/s-48.3MB/s), io=1105MiB (1159MB), run=10037-10046msec 00:38:42.963 10:10:56 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:38:42.963 10:10:56 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:38:42.963 10:10:56 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:38:42.963 10:10:56 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:42.963 10:10:56 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:38:42.963 10:10:56 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:42.963 10:10:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:42.963 10:10:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:42.963 10:10:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:42.963 10:10:56 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:42.964 10:10:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:42.964 10:10:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:42.964 10:10:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:42.964 00:38:42.964 real 0m11.187s 00:38:42.964 user 0m44.634s 00:38:42.964 sys 0m1.939s 00:38:42.964 10:10:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:42.964 10:10:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:42.964 ************************************ 00:38:42.964 END TEST fio_dif_digest 00:38:42.964 ************************************ 00:38:42.964 10:10:56 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:38:42.964 10:10:56 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:38:42.964 10:10:56 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:42.964 10:10:56 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:38:42.964 10:10:56 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:42.964 10:10:56 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:38:42.964 10:10:56 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:42.964 10:10:56 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:42.964 rmmod nvme_tcp 00:38:42.964 rmmod nvme_fabrics 00:38:42.964 rmmod nvme_keyring 00:38:42.964 10:10:56 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:42.964 10:10:56 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:38:42.964 10:10:56 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:38:42.964 10:10:56 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 4184502 ']' 00:38:42.964 10:10:56 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 4184502 00:38:42.964 10:10:56 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 4184502 ']' 00:38:42.964 10:10:56 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 4184502 00:38:42.964 10:10:56 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:38:42.964 10:10:56 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:42.964 10:10:56 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4184502 00:38:42.964 10:10:56 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:42.964 10:10:56 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:42.964 10:10:56 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4184502' 00:38:42.964 killing process with pid 4184502 00:38:42.964 10:10:56 nvmf_dif -- common/autotest_common.sh@973 -- # kill 4184502 00:38:42.964 10:10:56 nvmf_dif -- common/autotest_common.sh@978 -- # wait 4184502 00:38:42.964 10:10:56 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:38:42.964 10:10:56 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:44.890 Waiting for block devices as requested 00:38:44.890 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:45.151 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:45.151 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:45.151 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:45.412 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:45.412 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:45.412 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:45.672 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:45.672 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:45.933 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:45.933 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:45.933 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:46.194 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:46.194 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:46.194 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:46.454 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:46.454 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:46.715 10:11:02 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:46.715 10:11:02 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:46.715 10:11:02 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:38:46.715 10:11:02 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:38:46.715 10:11:02 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:46.715 10:11:02 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:38:46.715 10:11:02 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:46.715 10:11:02 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:46.715 10:11:02 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:46.715 10:11:02 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:46.715 10:11:02 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:49.261 10:11:04 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:49.261 
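The teardown traced here (nvmftestfini) is the part most often skipped when a run dies early, leaving a stale target process and kernel modules loaded. Its manual equivalent is sketched below; $nvmfpid is a placeholder for the target pid the harness tracks (4184502 in this run), and the iptables step mirrors the iptables-save | grep -v SPDK_NVMF | iptables-restore seen in the trace:

# Unload the kernel NVMe/TCP initiator stack; the harness sets +e so absent
# modules are tolerated (hence the rmmod chatter above).
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
modprobe -v -r nvme-keyring

# Stop the SPDK target started by nvmfappstart.
kill "$nvmfpid"   # placeholder; this run's target was pid 4184502

# Drop the SPDK_NVMF iptables rules added for the test, then flush the
# initiator-side test address.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip -4 addr flush cvl_0_1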
00:38:49.261 real 1m18.621s 00:38:49.261 user 7m58.164s 00:38:49.261 sys 0m22.200s 00:38:49.261 10:11:04 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:49.261 10:11:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:49.261 ************************************ 00:38:49.261 END TEST nvmf_dif 00:38:49.261 ************************************ 00:38:49.261 10:11:04 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:49.261 10:11:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:49.261 10:11:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:49.261 10:11:04 -- common/autotest_common.sh@10 -- # set +x 00:38:49.261 ************************************ 00:38:49.261 START TEST nvmf_abort_qd_sizes 00:38:49.261 ************************************ 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:49.261 * Looking for test storage... 00:38:49.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:49.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:49.261 --rc genhtml_branch_coverage=1 00:38:49.261 --rc genhtml_function_coverage=1 00:38:49.261 --rc genhtml_legend=1 00:38:49.261 --rc geninfo_all_blocks=1 00:38:49.261 --rc geninfo_unexecuted_blocks=1 00:38:49.261 00:38:49.261 ' 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:49.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:49.261 --rc genhtml_branch_coverage=1 00:38:49.261 --rc genhtml_function_coverage=1 00:38:49.261 --rc genhtml_legend=1 00:38:49.261 --rc geninfo_all_blocks=1 00:38:49.261 --rc geninfo_unexecuted_blocks=1 00:38:49.261 00:38:49.261 ' 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:49.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:49.261 --rc genhtml_branch_coverage=1 00:38:49.261 --rc genhtml_function_coverage=1 00:38:49.261 --rc genhtml_legend=1 00:38:49.261 --rc geninfo_all_blocks=1 00:38:49.261 --rc geninfo_unexecuted_blocks=1 00:38:49.261 00:38:49.261 ' 00:38:49.261 10:11:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:49.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:49.262 --rc genhtml_branch_coverage=1 00:38:49.262 --rc genhtml_function_coverage=1 00:38:49.262 --rc genhtml_legend=1 00:38:49.262 --rc geninfo_all_blocks=1 00:38:49.262 --rc geninfo_unexecuted_blocks=1 00:38:49.262 00:38:49.262 ' 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:49.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:38:49.262 10:11:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:57.406 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:57.406 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:57.406 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:57.406 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:57.406 10:11:11 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:57.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:57.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:38:57.406 00:38:57.406 --- 10.0.0.2 ping statistics --- 00:38:57.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:57.406 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:57.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:57.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:38:57.406 00:38:57.406 --- 10.0.0.1 ping statistics --- 00:38:57.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:57.406 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:38:57.406 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:38:57.407 10:11:11 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:59.980 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:59.980 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:59.980 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:59.980 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:59.980 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:59.980 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:59.980 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:59.980 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:59.980 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:59.980 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:59.980 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:59.980 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:59.980 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:59.980 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:59.980 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:59.980 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:59.980 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:39:00.344 10:11:15 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:00.344 10:11:15 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:00.344 10:11:15 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:00.344 10:11:15 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:00.344 10:11:15 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:00.344 10:11:15 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:00.344 10:11:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:39:00.344 10:11:15 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:00.344 10:11:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:00.344 10:11:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:00.344 10:11:15 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=11083 00:39:00.344 10:11:15 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 11083 00:39:00.344 10:11:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 11083 ']' 00:39:00.344 10:11:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:00.344 10:11:15 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:39:00.344 10:11:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:00.344 10:11:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:39:00.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:00.344 10:11:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:00.344 10:11:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:00.344 [2024-11-27 10:11:15.735262] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:39:00.344 [2024-11-27 10:11:15.735328] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:00.716 [2024-11-27 10:11:15.835869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:00.716 [2024-11-27 10:11:15.890723] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:00.716 [2024-11-27 10:11:15.890779] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:00.716 [2024-11-27 10:11:15.890791] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:00.716 [2024-11-27 10:11:15.890802] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:00.716 [2024-11-27 10:11:15.890810] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:00.716 [2024-11-27 10:11:15.892894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:00.716 [2024-11-27 10:11:15.892930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:00.716 [2024-11-27 10:11:15.893066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:00.716 [2024-11-27 10:11:15.893066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:01.287 10:11:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:01.287 10:11:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:39:01.287 10:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:01.287 10:11:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:01.287 10:11:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:01.287 10:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:01.287 10:11:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:39:01.287 10:11:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:39:01.287 10:11:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:39:01.287 10:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:39:01.287 10:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:39:01.287 10:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:39:01.287 10:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:39:01.287 10:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:39:01.287 10:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:39:01.287 10:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:39:01.287 
10:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:39:01.287 10:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:39:01.287 10:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:39:01.287 10:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:39:01.287 10:11:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:39:01.287 10:11:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:39:01.287 10:11:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:39:01.287 10:11:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:01.287 10:11:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:01.287 10:11:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:01.287 ************************************ 00:39:01.287 START TEST spdk_target_abort 00:39:01.287 ************************************ 00:39:01.287 10:11:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:39:01.287 10:11:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:39:01.287 10:11:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:39:01.287 10:11:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.288 10:11:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:01.548 spdk_targetn1 00:39:01.548 10:11:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.548 10:11:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:01.548 10:11:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.548 10:11:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:01.548 [2024-11-27 10:11:16.984772] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:01.548 10:11:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.548 10:11:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:39:01.548 10:11:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.548 10:11:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:01.548 10:11:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.548 10:11:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:39:01.548 10:11:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.548 10:11:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:01.809 10:11:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.809 10:11:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:39:01.809 10:11:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.809 10:11:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:01.809 [2024-11-27 10:11:17.033210] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:01.809 10:11:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.809 10:11:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:39:01.809 10:11:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:01.809 10:11:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:01.809 10:11:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:39:01.809 10:11:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:01.809 10:11:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:01.809 10:11:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:01.809 10:11:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:01.809 10:11:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:01.809 10:11:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:01.809 10:11:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:01.809 10:11:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:01.809 10:11:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:01.809 10:11:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:01.809 10:11:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:39:01.809 10:11:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:01.809 10:11:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:01.809 10:11:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:01.809 10:11:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:01.809 10:11:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:01.809 10:11:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:01.809 [2024-11-27 10:11:17.274359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:189 nsid:1 lba:40 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:39:02.071 [2024-11-27 10:11:17.274412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:39:02.071 [2024-11-27 10:11:17.307752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1008 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:39:02.071 [2024-11-27 10:11:17.307788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0081 p:1 m:0 dnr:0 00:39:02.071 [2024-11-27 10:11:17.329761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1640 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:39:02.071 [2024-11-27 10:11:17.329792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00cf p:1 m:0 dnr:0 00:39:02.071 [2024-11-27 10:11:17.353663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2352 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:39:02.071 [2024-11-27 10:11:17.353694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:39:02.071 [2024-11-27 10:11:17.369682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2824 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:39:02.071 [2024-11-27 10:11:17.369712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:39:02.071 [2024-11-27 10:11:17.394710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:3616 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:39:02.071 [2024-11-27 10:11:17.394742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00c6 p:0 m:0 dnr:0 00:39:02.071 [2024-11-27 10:11:17.402906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3840 len:8 PRP1 0x200004abe000 PRP2 0x0 00:39:02.071 [2024-11-27 10:11:17.402934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00e4 p:0 m:0 dnr:0 00:39:05.369 Initializing NVMe Controllers 00:39:05.369 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:05.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:05.369 Initialization complete. Launching workers. 
00:39:05.369 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12017, failed: 7 00:39:05.369 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2598, failed to submit 9426 00:39:05.369 success 785, unsuccessful 1813, failed 0 00:39:05.369 10:11:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:05.369 10:11:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:05.369 [2024-11-27 10:11:20.451504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:488 len:8 PRP1 0x200004e50000 PRP2 0x0 00:39:05.369 [2024-11-27 10:11:20.451545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:0048 p:1 m:0 dnr:0 00:39:05.369 [2024-11-27 10:11:20.497257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 lba:1672 len:8 PRP1 0x200004e5c000 PRP2 0x0 00:39:05.369 [2024-11-27 10:11:20.497284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:00d2 p:1 m:0 dnr:0 00:39:05.369 [2024-11-27 10:11:20.528287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:174 nsid:1 lba:2344 len:8 PRP1 0x200004e4e000 PRP2 0x0 00:39:05.369 [2024-11-27 10:11:20.528313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:174 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:39:05.369 [2024-11-27 10:11:20.544040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:2616 len:8 PRP1 0x200004e42000 PRP2 0x0 00:39:05.369 [2024-11-27 10:11:20.544063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:39:05.369 [2024-11-27 10:11:20.560193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:180 nsid:1 lba:2976 len:8 PRP1 0x200004e42000 PRP2 0x0 00:39:05.369 [2024-11-27 10:11:20.560216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:180 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:39:05.369 [2024-11-27 10:11:20.592234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:3752 len:8 PRP1 0x200004e40000 PRP2 0x0 00:39:05.369 [2024-11-27 10:11:20.592258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:00da p:0 m:0 dnr:0 00:39:05.629 [2024-11-27 10:11:20.895439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:180 nsid:1 lba:10464 len:8 PRP1 0x200004e4a000 PRP2 0x0 00:39:05.629 [2024-11-27 10:11:20.895469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:180 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:39:08.177 Initializing NVMe Controllers 00:39:08.177 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:08.177 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:08.177 Initialization complete. Launching workers. 
00:39:08.177 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8508, failed: 7 00:39:08.177 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1227, failed to submit 7288 00:39:08.177 success 311, unsuccessful 916, failed 0 00:39:08.177 10:11:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:08.177 10:11:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:10.724 [2024-11-27 10:11:25.790535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:162 nsid:1 lba:237056 len:8 PRP1 0x200004b16000 PRP2 0x0 00:39:10.724 [2024-11-27 10:11:25.790571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:162 cdw0:0 sqhd:00b9 p:1 m:0 dnr:0 00:39:10.724 [2024-11-27 10:11:25.962929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:173 nsid:1 lba:256696 len:8 PRP1 0x200004af0000 PRP2 0x0 00:39:10.724 [2024-11-27 10:11:25.962950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:173 cdw0:0 sqhd:0059 p:1 m:0 dnr:0 00:39:11.664 Initializing NVMe Controllers 00:39:11.664 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:11.664 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:11.664 Initialization complete. Launching workers. 00:39:11.664 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43430, failed: 2 00:39:11.664 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2692, failed to submit 40740 00:39:11.664 success 613, unsuccessful 2079, failed 0 00:39:11.664 10:11:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:39:11.664 10:11:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.664 10:11:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:11.664 10:11:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.664 10:11:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:39:11.664 10:11:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.664 10:11:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 11083 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 11083 ']' 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 11083 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 11083 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 11083' 00:39:13.575 killing process with pid 11083 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 11083 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 11083 00:39:13.575 00:39:13.575 real 0m12.135s 00:39:13.575 user 0m49.464s 00:39:13.575 sys 0m2.044s 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:13.575 ************************************ 00:39:13.575 END TEST spdk_target_abort 00:39:13.575 ************************************ 00:39:13.575 10:11:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:39:13.575 10:11:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:13.575 10:11:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:13.575 10:11:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:13.575 ************************************ 00:39:13.575 START TEST kernel_target_abort 00:39:13.575 ************************************ 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:39:13.575 10:11:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:16.874 Waiting for block devices as requested 00:39:16.875 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:17.135 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:17.135 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:17.135 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:17.135 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:17.395 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:17.395 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:17.395 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:17.657 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:39:17.657 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:17.917 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:17.917 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:17.917 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:18.177 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:18.177 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:18.177 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:18.437 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:18.697 10:11:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:39:18.697 10:11:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:39:18.697 10:11:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:39:18.697 10:11:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:39:18.697 10:11:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:39:18.697 10:11:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:39:18.697 10:11:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:39:18.698 10:11:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:39:18.698 10:11:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:39:18.698 No valid GPT data, bailing 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- 
scripts/common.sh@395 -- # return 1 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:39:18.698 00:39:18.698 Discovery Log Number of Records 2, Generation counter 2 00:39:18.698 =====Discovery Log Entry 0====== 00:39:18.698 trtype: tcp 00:39:18.698 adrfam: ipv4 00:39:18.698 subtype: current discovery subsystem 00:39:18.698 treq: not specified, sq flow control disable supported 00:39:18.698 portid: 1 00:39:18.698 trsvcid: 4420 00:39:18.698 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:39:18.698 traddr: 10.0.0.1 00:39:18.698 eflags: none 00:39:18.698 sectype: none 00:39:18.698 =====Discovery Log Entry 1====== 00:39:18.698 trtype: tcp 00:39:18.698 adrfam: ipv4 00:39:18.698 subtype: nvme subsystem 00:39:18.698 treq: not specified, sq flow control disable supported 00:39:18.698 portid: 1 00:39:18.698 trsvcid: 4420 00:39:18.698 subnqn: nqn.2016-06.io.spdk:testnqn 00:39:18.698 traddr: 10.0.0.1 00:39:18.698 eflags: none 00:39:18.698 sectype: none 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort 
-- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:18.698 10:11:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:21.998 Initializing NVMe Controllers 00:39:21.999 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:21.999 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:21.999 Initialization complete. Launching workers. 00:39:21.999 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68104, failed: 0 00:39:21.999 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 68104, failed to submit 0 00:39:21.999 success 0, unsuccessful 68104, failed 0 00:39:21.999 10:11:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:21.999 10:11:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:25.301 Initializing NVMe Controllers 00:39:25.301 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:25.301 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:25.301 Initialization complete. 
Launching workers. 00:39:25.301 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 118763, failed: 0 00:39:25.301 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29898, failed to submit 88865 00:39:25.301 success 0, unsuccessful 29898, failed 0 00:39:25.301 10:11:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:25.301 10:11:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:28.601 Initializing NVMe Controllers 00:39:28.601 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:28.601 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:28.601 Initialization complete. Launching workers. 00:39:28.601 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145683, failed: 0 00:39:28.601 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36458, failed to submit 109225 00:39:28.601 success 0, unsuccessful 36458, failed 0 00:39:28.601 10:11:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:39:28.601 10:11:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:39:28.602 10:11:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:39:28.602 10:11:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:28.602 10:11:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:28.602 10:11:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:39:28.602 10:11:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:28.602 10:11:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:39:28.602 10:11:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:39:28.602 10:11:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:31.904 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:31.904 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:31.904 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:31.904 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:31.904 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:31.904 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:31.904 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:39:31.904 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:31.904 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:31.904 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:31.904 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:31.904 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:31.904 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:31.904 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:31.904 0000:00:01.0 
(8086 0b00): ioatdma -> vfio-pci 00:39:31.904 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:33.825 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:39:33.825 00:39:33.825 real 0m20.276s 00:39:33.825 user 0m9.904s 00:39:33.825 sys 0m6.073s 00:39:33.825 10:11:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:33.825 10:11:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:33.825 ************************************ 00:39:33.825 END TEST kernel_target_abort 00:39:33.825 ************************************ 00:39:33.825 10:11:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:39:33.825 10:11:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:39:33.825 10:11:49 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:33.825 10:11:49 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:39:33.825 10:11:49 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:33.825 10:11:49 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:39:33.825 10:11:49 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:33.825 10:11:49 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:33.825 rmmod nvme_tcp 00:39:33.825 rmmod nvme_fabrics 00:39:33.825 rmmod nvme_keyring 00:39:33.825 10:11:49 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:33.825 10:11:49 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:39:33.825 10:11:49 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:39:33.825 10:11:49 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 11083 ']' 00:39:33.825 10:11:49 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 11083 00:39:33.825 10:11:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 11083 ']' 00:39:33.825 10:11:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 11083 00:39:33.825 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (11083) - No such process 00:39:33.825 10:11:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 11083 is not found' 00:39:33.825 Process with pid 11083 is not found 00:39:33.825 10:11:49 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:39:33.825 10:11:49 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:37.140 Waiting for block devices as requested 00:39:37.401 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:37.401 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:37.401 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:37.699 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:37.699 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:37.699 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:37.699 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:37.981 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:37.981 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:39:38.243 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:38.243 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:38.243 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:38.503 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:38.503 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:38.503 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:38.764 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:38.764 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:39.025 10:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:39.025 10:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:39.025 10:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:39:39.025 10:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:39:39.025 10:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:39.025 10:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:39:39.025 10:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:39.025 10:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:39.025 10:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:39.025 10:11:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:39.025 10:11:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:41.568 10:11:56 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:41.568 00:39:41.568 real 0m52.255s 00:39:41.568 user 1m4.822s 00:39:41.568 sys 0m19.094s 00:39:41.568 10:11:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:41.568 10:11:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:41.568 ************************************ 00:39:41.568 END TEST nvmf_abort_qd_sizes 00:39:41.568 ************************************ 00:39:41.568 10:11:56 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:41.568 10:11:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:41.568 10:11:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:41.568 10:11:56 -- common/autotest_common.sh@10 -- # set +x 00:39:41.568 ************************************ 00:39:41.568 START TEST keyring_file 00:39:41.568 ************************************ 00:39:41.568 10:11:56 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:41.568 * Looking for test storage... 
00:39:41.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:41.568 10:11:56 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:41.568 10:11:56 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:39:41.568 10:11:56 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:41.568 10:11:56 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:41.568 10:11:56 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:41.568 10:11:56 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:41.568 10:11:56 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:41.568 10:11:56 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:39:41.568 10:11:56 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:39:41.568 10:11:56 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:39:41.568 10:11:56 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:39:41.568 10:11:56 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:39:41.568 10:11:56 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:39:41.568 10:11:56 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:39:41.568 10:11:56 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:41.568 10:11:56 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:39:41.568 10:11:56 keyring_file -- scripts/common.sh@345 -- # : 1 00:39:41.568 10:11:56 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:41.568 10:11:56 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:41.568 10:11:56 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:39:41.568 10:11:56 keyring_file -- scripts/common.sh@353 -- # local d=1 00:39:41.568 10:11:56 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:41.568 10:11:56 keyring_file -- scripts/common.sh@355 -- # echo 1 00:39:41.568 10:11:56 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:39:41.568 10:11:56 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:39:41.568 10:11:56 keyring_file -- scripts/common.sh@353 -- # local d=2 00:39:41.568 10:11:56 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:41.568 10:11:56 keyring_file -- scripts/common.sh@355 -- # echo 2 00:39:41.568 10:11:56 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:39:41.568 10:11:56 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:41.568 10:11:56 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:41.568 10:11:56 keyring_file -- scripts/common.sh@368 -- # return 0 00:39:41.568 10:11:56 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:41.568 10:11:56 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:41.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:41.568 --rc genhtml_branch_coverage=1 00:39:41.568 --rc genhtml_function_coverage=1 00:39:41.568 --rc genhtml_legend=1 00:39:41.568 --rc geninfo_all_blocks=1 00:39:41.568 --rc geninfo_unexecuted_blocks=1 00:39:41.568 00:39:41.568 ' 00:39:41.568 10:11:56 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:41.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:41.568 --rc genhtml_branch_coverage=1 00:39:41.568 --rc genhtml_function_coverage=1 00:39:41.568 --rc genhtml_legend=1 00:39:41.568 --rc geninfo_all_blocks=1 
00:39:41.568 --rc geninfo_unexecuted_blocks=1 00:39:41.568 00:39:41.568 ' 00:39:41.568 10:11:56 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:41.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:41.568 --rc genhtml_branch_coverage=1 00:39:41.568 --rc genhtml_function_coverage=1 00:39:41.568 --rc genhtml_legend=1 00:39:41.568 --rc geninfo_all_blocks=1 00:39:41.568 --rc geninfo_unexecuted_blocks=1 00:39:41.568 00:39:41.569 ' 00:39:41.569 10:11:56 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:41.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:41.569 --rc genhtml_branch_coverage=1 00:39:41.569 --rc genhtml_function_coverage=1 00:39:41.569 --rc genhtml_legend=1 00:39:41.569 --rc geninfo_all_blocks=1 00:39:41.569 --rc geninfo_unexecuted_blocks=1 00:39:41.569 00:39:41.569 ' 00:39:41.569 10:11:56 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:41.569 10:11:56 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:41.569 10:11:56 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:39:41.569 10:11:56 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:41.569 10:11:56 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:41.569 10:11:56 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:41.569 10:11:56 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:41.569 10:11:56 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:41.569 10:11:56 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:41.569 10:11:56 keyring_file -- paths/export.sh@5 -- # export PATH 00:39:41.569 10:11:56 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@51 -- # : 0 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:41.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:41.569 10:11:56 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:41.569 10:11:56 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:41.569 10:11:56 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:41.569 10:11:56 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:39:41.569 10:11:56 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:39:41.569 10:11:56 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:39:41.569 10:11:56 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:41.569 10:11:56 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
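prep_key, whose trace continues below, turns the raw hex string 00112233445566778899aabbccddeeff into an NVMe/TCP TLS PSK interchange string (prefix NVMeTLSkey-1, as set at nvmf/common.sh@732) and stores it in a mktemp file restricted to mode 0600. A minimal standalone sketch of that framing, assuming the key bytes are the ASCII hex string itself with a little-endian CRC32 appended before base64 encoding, and hash indicator 00 for digest 0; make_interchange_psk is an illustrative name, not SPDK's helper:

  # Sketch: build "NVMeTLSkey-1:00:<base64(key || crc32(key))>:" and store it 0600.
  make_interchange_psk() {
      local key=$1 path
      path=$(mktemp)
      # Mirrors the "python -" step traced below; Python stdlib only.
      python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:00:" + base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode() + ":")' "$key" > "$path"
      chmod 0600 "$path"
      echo "$path"
  }
  # e.g. key0path=$(make_interchange_psk 00112233445566778899aabbccddeeff)

The 0600 mode is load-bearing: later in this run, keyring_file_add_key rejects the very same file once it has been chmod'ed to 0660.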
00:39:41.569 10:11:56 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:41.569 10:11:56 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:41.569 10:11:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:41.569 10:11:56 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:41.569 10:11:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.rXQAry5rTk 00:39:41.569 10:11:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:41.569 10:11:56 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.rXQAry5rTk 00:39:41.569 10:11:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.rXQAry5rTk 00:39:41.569 10:11:56 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.rXQAry5rTk 00:39:41.569 10:11:56 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:39:41.569 10:11:56 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:41.569 10:11:56 keyring_file -- keyring/common.sh@17 -- # name=key1 00:39:41.569 10:11:56 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:41.569 10:11:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:41.569 10:11:56 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:41.569 10:11:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.XP5WTNdAcw 00:39:41.569 10:11:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:41.569 10:11:56 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:41.569 10:11:56 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.XP5WTNdAcw 00:39:41.569 10:11:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.XP5WTNdAcw 00:39:41.569 10:11:56 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.XP5WTNdAcw 00:39:41.569 10:11:56 keyring_file -- keyring/file.sh@30 -- # tgtpid=21594 00:39:41.569 10:11:56 keyring_file -- keyring/file.sh@32 -- # waitforlisten 21594 00:39:41.569 10:11:56 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:41.569 10:11:56 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 21594 ']' 00:39:41.569 10:11:56 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:41.569 10:11:56 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:41.569 10:11:56 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:41.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:41.569 10:11:56 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:41.569 10:11:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:41.569 [2024-11-27 10:11:56.972637] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:39:41.569 [2024-11-27 10:11:56.972692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid21594 ] 00:39:41.830 [2024-11-27 10:11:57.059184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:41.830 [2024-11-27 10:11:57.096465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:42.401 10:11:57 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:42.401 10:11:57 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:42.401 10:11:57 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:39:42.401 10:11:57 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.401 10:11:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:42.401 [2024-11-27 10:11:57.757216] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:42.401 null0 00:39:42.401 [2024-11-27 10:11:57.789267] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:42.401 [2024-11-27 10:11:57.789626] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:42.401 10:11:57 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.401 10:11:57 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:42.401 10:11:57 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:42.401 10:11:57 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:42.401 10:11:57 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:39:42.401 10:11:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:42.401 10:11:57 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:39:42.401 10:11:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:42.401 10:11:57 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:42.401 10:11:57 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.401 10:11:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:42.401 [2024-11-27 10:11:57.821336] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:39:42.401 request: 00:39:42.401 { 00:39:42.401 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:39:42.401 "secure_channel": false, 00:39:42.401 "listen_address": { 00:39:42.401 "trtype": "tcp", 00:39:42.401 "traddr": "127.0.0.1", 00:39:42.401 "trsvcid": "4420" 00:39:42.401 }, 00:39:42.401 "method": "nvmf_subsystem_add_listener", 00:39:42.401 "req_id": 1 00:39:42.401 } 00:39:42.401 Got JSON-RPC error response 00:39:42.401 response: 00:39:42.401 { 00:39:42.401 "code": -32602, 
00:39:42.401 "message": "Invalid parameters" 00:39:42.401 } 00:39:42.401 10:11:57 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:39:42.401 10:11:57 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:42.401 10:11:57 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:42.401 10:11:57 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:42.401 10:11:57 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:42.401 10:11:57 keyring_file -- keyring/file.sh@47 -- # bperfpid=21630 00:39:42.401 10:11:57 keyring_file -- keyring/file.sh@49 -- # waitforlisten 21630 /var/tmp/bperf.sock 00:39:42.401 10:11:57 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 21630 ']' 00:39:42.401 10:11:57 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:39:42.401 10:11:57 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:42.401 10:11:57 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:42.401 10:11:57 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:42.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:42.401 10:11:57 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:42.401 10:11:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:42.662 [2024-11-27 10:11:57.881089] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:39:42.662 [2024-11-27 10:11:57.881137] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid21630 ] 00:39:42.662 [2024-11-27 10:11:57.967329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:42.662 [2024-11-27 10:11:58.004059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:43.236 10:11:58 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:43.236 10:11:58 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:43.236 10:11:58 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rXQAry5rTk 00:39:43.236 10:11:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rXQAry5rTk 00:39:43.498 10:11:58 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.XP5WTNdAcw 00:39:43.498 10:11:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.XP5WTNdAcw 00:39:43.759 10:11:59 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:39:43.759 10:11:59 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:39:43.759 10:11:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:43.759 10:11:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:43.759 10:11:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:44.020 10:11:59 
keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.rXQAry5rTk == \/\t\m\p\/\t\m\p\.\r\X\Q\A\r\y\5\r\T\k ]] 00:39:44.020 10:11:59 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:39:44.020 10:11:59 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:39:44.020 10:11:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:44.020 10:11:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:44.020 10:11:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:44.020 10:11:59 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.XP5WTNdAcw == \/\t\m\p\/\t\m\p\.\X\P\5\W\T\N\d\A\c\w ]] 00:39:44.020 10:11:59 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:39:44.020 10:11:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:44.020 10:11:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:44.020 10:11:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:44.020 10:11:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:44.020 10:11:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:44.280 10:11:59 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:39:44.280 10:11:59 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:39:44.280 10:11:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:44.280 10:11:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:44.280 10:11:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:44.280 10:11:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:44.280 10:11:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:44.541 10:11:59 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:39:44.541 10:11:59 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:44.541 10:11:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:44.541 [2024-11-27 10:11:59.951272] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:44.802 nvme0n1 00:39:44.802 10:12:00 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:39:44.802 10:12:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:44.802 10:12:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:44.802 10:12:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:44.802 10:12:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:44.802 10:12:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:44.802 10:12:00 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:39:44.802 10:12:00 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:39:44.802 10:12:00 keyring_file -- keyring/common.sh@12 
-- # get_key key1 00:39:44.802 10:12:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:44.802 10:12:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:44.802 10:12:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:44.802 10:12:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:45.063 10:12:00 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:39:45.063 10:12:00 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:45.063 Running I/O for 1 seconds... 00:39:46.447 20097.00 IOPS, 78.50 MiB/s 00:39:46.447 Latency(us) 00:39:46.447 [2024-11-27T09:12:01.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:46.447 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:39:46.447 nvme0n1 : 1.00 20146.82 78.70 0.00 0.00 6343.23 2430.29 9284.27 00:39:46.447 [2024-11-27T09:12:01.913Z] =================================================================================================================== 00:39:46.447 [2024-11-27T09:12:01.913Z] Total : 20146.82 78.70 0.00 0.00 6343.23 2430.29 9284.27 00:39:46.447 { 00:39:46.447 "results": [ 00:39:46.447 { 00:39:46.447 "job": "nvme0n1", 00:39:46.447 "core_mask": "0x2", 00:39:46.447 "workload": "randrw", 00:39:46.447 "percentage": 50, 00:39:46.447 "status": "finished", 00:39:46.447 "queue_depth": 128, 00:39:46.447 "io_size": 4096, 00:39:46.447 "runtime": 1.00393, 00:39:46.447 "iops": 20146.82298566633, 00:39:46.447 "mibps": 78.6985272877591, 00:39:46.447 "io_failed": 0, 00:39:46.447 "io_timeout": 0, 00:39:46.447 "avg_latency_us": 6343.228567849962, 00:39:46.447 "min_latency_us": 2430.2933333333335, 00:39:46.447 "max_latency_us": 9284.266666666666 00:39:46.447 } 00:39:46.447 ], 00:39:46.447 "core_count": 1 00:39:46.447 } 00:39:46.447 10:12:01 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:46.447 10:12:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:46.447 10:12:01 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:39:46.447 10:12:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:46.447 10:12:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:46.447 10:12:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:46.447 10:12:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:46.447 10:12:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:46.447 10:12:01 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:39:46.447 10:12:01 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:39:46.447 10:12:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:46.447 10:12:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:46.447 10:12:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:46.447 10:12:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:46.447 10:12:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bperf.sock keyring_get_keys 00:39:46.708 10:12:02 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:39:46.708 10:12:02 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:46.708 10:12:02 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:46.708 10:12:02 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:46.708 10:12:02 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:46.708 10:12:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:46.708 10:12:02 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:46.708 10:12:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:46.708 10:12:02 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:46.708 10:12:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:46.970 [2024-11-27 10:12:02.211771] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:46.970 [2024-11-27 10:12:02.212714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c18c0 (107): Transport endpoint is not connected 00:39:46.970 [2024-11-27 10:12:02.213710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c18c0 (9): Bad file descriptor 00:39:46.970 [2024-11-27 10:12:02.214711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:39:46.970 [2024-11-27 10:12:02.214719] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:46.970 [2024-11-27 10:12:02.214725] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:39:46.970 [2024-11-27 10:12:02.214731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:39:46.970 request: 00:39:46.970 { 00:39:46.970 "name": "nvme0", 00:39:46.970 "trtype": "tcp", 00:39:46.970 "traddr": "127.0.0.1", 00:39:46.970 "adrfam": "ipv4", 00:39:46.970 "trsvcid": "4420", 00:39:46.970 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:46.970 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:46.970 "prchk_reftag": false, 00:39:46.970 "prchk_guard": false, 00:39:46.970 "hdgst": false, 00:39:46.970 "ddgst": false, 00:39:46.970 "psk": "key1", 00:39:46.970 "allow_unrecognized_csi": false, 00:39:46.970 "method": "bdev_nvme_attach_controller", 00:39:46.970 "req_id": 1 00:39:46.970 } 00:39:46.970 Got JSON-RPC error response 00:39:46.970 response: 00:39:46.970 { 00:39:46.970 "code": -5, 00:39:46.970 "message": "Input/output error" 00:39:46.970 } 00:39:46.970 10:12:02 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:46.970 10:12:02 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:46.970 10:12:02 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:46.970 10:12:02 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:46.970 10:12:02 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:39:46.970 10:12:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:46.970 10:12:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:46.970 10:12:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:46.970 10:12:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:46.970 10:12:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:46.970 10:12:02 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:39:46.970 10:12:02 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:39:46.970 10:12:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:46.970 10:12:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:46.970 10:12:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:46.970 10:12:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:46.971 10:12:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:47.232 10:12:02 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:39:47.232 10:12:02 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:39:47.232 10:12:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:47.494 10:12:02 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:39:47.494 10:12:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:39:47.494 10:12:02 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:39:47.494 10:12:02 keyring_file -- keyring/file.sh@78 -- # jq length 00:39:47.494 10:12:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:47.756 10:12:03 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:39:47.756 10:12:03 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.rXQAry5rTk 00:39:47.756 10:12:03 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.rXQAry5rTk 00:39:47.756 10:12:03 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:47.756 10:12:03 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.rXQAry5rTk 00:39:47.756 10:12:03 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:47.756 10:12:03 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:47.756 10:12:03 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:47.756 10:12:03 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:47.756 10:12:03 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rXQAry5rTk 00:39:47.756 10:12:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rXQAry5rTk 00:39:48.017 [2024-11-27 10:12:03.231367] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.rXQAry5rTk': 0100660 00:39:48.017 [2024-11-27 10:12:03.231391] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:39:48.017 request: 00:39:48.017 { 00:39:48.017 "name": "key0", 00:39:48.017 "path": "/tmp/tmp.rXQAry5rTk", 00:39:48.017 "method": "keyring_file_add_key", 00:39:48.017 "req_id": 1 00:39:48.017 } 00:39:48.017 Got JSON-RPC error response 00:39:48.017 response: 00:39:48.017 { 00:39:48.017 "code": -1, 00:39:48.017 "message": "Operation not permitted" 00:39:48.017 } 00:39:48.017 10:12:03 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:48.017 10:12:03 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:48.017 10:12:03 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:48.017 10:12:03 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:48.017 10:12:03 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.rXQAry5rTk 00:39:48.017 10:12:03 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rXQAry5rTk 00:39:48.017 10:12:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rXQAry5rTk 00:39:48.017 10:12:03 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.rXQAry5rTk 00:39:48.017 10:12:03 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:39:48.017 10:12:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:48.017 10:12:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:48.017 10:12:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:48.017 10:12:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:48.017 10:12:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:48.278 10:12:03 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:39:48.278 10:12:03 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:48.278 10:12:03 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:48.278 10:12:03 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:48.278 10:12:03 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:48.278 10:12:03 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:48.278 10:12:03 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:48.278 10:12:03 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:48.278 10:12:03 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:48.279 10:12:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:48.540 [2024-11-27 10:12:03.796798] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.rXQAry5rTk': No such file or directory 00:39:48.540 [2024-11-27 10:12:03.796812] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:39:48.540 [2024-11-27 10:12:03.796824] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:39:48.540 [2024-11-27 10:12:03.796830] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:39:48.540 [2024-11-27 10:12:03.796835] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:48.540 [2024-11-27 10:12:03.796840] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:39:48.540 request: 00:39:48.540 { 00:39:48.540 "name": "nvme0", 00:39:48.540 "trtype": "tcp", 00:39:48.540 "traddr": "127.0.0.1", 00:39:48.540 "adrfam": "ipv4", 00:39:48.540 "trsvcid": "4420", 00:39:48.540 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:48.540 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:48.540 "prchk_reftag": false, 00:39:48.540 "prchk_guard": false, 00:39:48.540 "hdgst": false, 00:39:48.540 "ddgst": false, 00:39:48.540 "psk": "key0", 00:39:48.540 "allow_unrecognized_csi": false, 00:39:48.540 "method": "bdev_nvme_attach_controller", 00:39:48.540 "req_id": 1 00:39:48.540 } 00:39:48.540 Got JSON-RPC error response 00:39:48.540 response: 00:39:48.540 { 00:39:48.540 "code": -19, 00:39:48.540 "message": "No such device" 00:39:48.540 } 00:39:48.540 10:12:03 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:48.540 10:12:03 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:48.540 10:12:03 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:48.540 10:12:03 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:48.540 10:12:03 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:39:48.540 10:12:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:48.540 10:12:03 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:48.540 10:12:03 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:39:48.540 10:12:03 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:48.540 10:12:03 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:48.540 10:12:03 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:48.540 10:12:03 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:48.540 10:12:03 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.PozAMGxMvN 00:39:48.540 10:12:03 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:48.540 10:12:03 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:48.540 10:12:03 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:48.540 10:12:03 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:48.540 10:12:03 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:39:48.540 10:12:03 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:48.540 10:12:03 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:48.803 10:12:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.PozAMGxMvN 00:39:48.803 10:12:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.PozAMGxMvN 00:39:48.803 10:12:04 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.PozAMGxMvN 00:39:48.803 10:12:04 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PozAMGxMvN 00:39:48.803 10:12:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PozAMGxMvN 00:39:48.803 10:12:04 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:48.803 10:12:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:49.064 nvme0n1 00:39:49.064 10:12:04 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:39:49.064 10:12:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:49.064 10:12:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:49.064 10:12:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:49.064 10:12:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:49.064 10:12:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:49.324 10:12:04 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:39:49.324 10:12:04 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:39:49.324 10:12:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:49.584 10:12:04 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:39:49.584 10:12:04 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:39:49.584 10:12:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:49.584 10:12:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:39:49.584 10:12:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:49.584 10:12:04 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:39:49.584 10:12:04 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:39:49.584 10:12:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:49.584 10:12:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:49.584 10:12:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:49.584 10:12:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:49.584 10:12:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:49.844 10:12:05 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:39:49.844 10:12:05 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:49.844 10:12:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:50.106 10:12:05 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:39:50.106 10:12:05 keyring_file -- keyring/file.sh@105 -- # jq length 00:39:50.106 10:12:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:50.106 10:12:05 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:39:50.106 10:12:05 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PozAMGxMvN 00:39:50.106 10:12:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PozAMGxMvN 00:39:50.366 10:12:05 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.XP5WTNdAcw 00:39:50.366 10:12:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.XP5WTNdAcw 00:39:50.366 10:12:05 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:50.366 10:12:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:50.627 nvme0n1 00:39:50.627 10:12:06 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:39:50.627 10:12:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:39:50.888 10:12:06 keyring_file -- keyring/file.sh@113 -- # config='{ 00:39:50.888 "subsystems": [ 00:39:50.888 { 00:39:50.888 "subsystem": "keyring", 00:39:50.888 "config": [ 00:39:50.888 { 00:39:50.888 "method": "keyring_file_add_key", 00:39:50.888 "params": { 00:39:50.888 "name": "key0", 00:39:50.888 "path": "/tmp/tmp.PozAMGxMvN" 00:39:50.888 } 00:39:50.888 }, 00:39:50.888 { 00:39:50.888 "method": "keyring_file_add_key", 00:39:50.888 "params": { 00:39:50.888 "name": "key1", 00:39:50.888 "path": "/tmp/tmp.XP5WTNdAcw" 00:39:50.888 } 00:39:50.888 } 00:39:50.888 ] 
00:39:50.888 }, 00:39:50.888 { 00:39:50.888 "subsystem": "iobuf", 00:39:50.888 "config": [ 00:39:50.888 { 00:39:50.888 "method": "iobuf_set_options", 00:39:50.888 "params": { 00:39:50.888 "small_pool_count": 8192, 00:39:50.888 "large_pool_count": 1024, 00:39:50.888 "small_bufsize": 8192, 00:39:50.888 "large_bufsize": 135168, 00:39:50.888 "enable_numa": false 00:39:50.888 } 00:39:50.888 } 00:39:50.888 ] 00:39:50.888 }, 00:39:50.888 { 00:39:50.888 "subsystem": "sock", 00:39:50.888 "config": [ 00:39:50.888 { 00:39:50.888 "method": "sock_set_default_impl", 00:39:50.888 "params": { 00:39:50.888 "impl_name": "posix" 00:39:50.888 } 00:39:50.888 }, 00:39:50.888 { 00:39:50.888 "method": "sock_impl_set_options", 00:39:50.888 "params": { 00:39:50.888 "impl_name": "ssl", 00:39:50.888 "recv_buf_size": 4096, 00:39:50.888 "send_buf_size": 4096, 00:39:50.888 "enable_recv_pipe": true, 00:39:50.888 "enable_quickack": false, 00:39:50.888 "enable_placement_id": 0, 00:39:50.888 "enable_zerocopy_send_server": true, 00:39:50.888 "enable_zerocopy_send_client": false, 00:39:50.888 "zerocopy_threshold": 0, 00:39:50.888 "tls_version": 0, 00:39:50.888 "enable_ktls": false 00:39:50.888 } 00:39:50.888 }, 00:39:50.888 { 00:39:50.888 "method": "sock_impl_set_options", 00:39:50.888 "params": { 00:39:50.888 "impl_name": "posix", 00:39:50.888 "recv_buf_size": 2097152, 00:39:50.888 "send_buf_size": 2097152, 00:39:50.888 "enable_recv_pipe": true, 00:39:50.888 "enable_quickack": false, 00:39:50.888 "enable_placement_id": 0, 00:39:50.888 "enable_zerocopy_send_server": true, 00:39:50.888 "enable_zerocopy_send_client": false, 00:39:50.888 "zerocopy_threshold": 0, 00:39:50.888 "tls_version": 0, 00:39:50.888 "enable_ktls": false 00:39:50.888 } 00:39:50.888 } 00:39:50.888 ] 00:39:50.888 }, 00:39:50.888 { 00:39:50.888 "subsystem": "vmd", 00:39:50.888 "config": [] 00:39:50.888 }, 00:39:50.888 { 00:39:50.888 "subsystem": "accel", 00:39:50.888 "config": [ 00:39:50.888 { 00:39:50.888 "method": "accel_set_options", 00:39:50.888 "params": { 00:39:50.888 "small_cache_size": 128, 00:39:50.888 "large_cache_size": 16, 00:39:50.888 "task_count": 2048, 00:39:50.888 "sequence_count": 2048, 00:39:50.888 "buf_count": 2048 00:39:50.888 } 00:39:50.888 } 00:39:50.888 ] 00:39:50.888 }, 00:39:50.888 { 00:39:50.888 "subsystem": "bdev", 00:39:50.888 "config": [ 00:39:50.888 { 00:39:50.888 "method": "bdev_set_options", 00:39:50.888 "params": { 00:39:50.888 "bdev_io_pool_size": 65535, 00:39:50.888 "bdev_io_cache_size": 256, 00:39:50.888 "bdev_auto_examine": true, 00:39:50.888 "iobuf_small_cache_size": 128, 00:39:50.888 "iobuf_large_cache_size": 16 00:39:50.888 } 00:39:50.888 }, 00:39:50.888 { 00:39:50.888 "method": "bdev_raid_set_options", 00:39:50.888 "params": { 00:39:50.888 "process_window_size_kb": 1024, 00:39:50.888 "process_max_bandwidth_mb_sec": 0 00:39:50.888 } 00:39:50.888 }, 00:39:50.888 { 00:39:50.888 "method": "bdev_iscsi_set_options", 00:39:50.888 "params": { 00:39:50.888 "timeout_sec": 30 00:39:50.888 } 00:39:50.888 }, 00:39:50.888 { 00:39:50.888 "method": "bdev_nvme_set_options", 00:39:50.888 "params": { 00:39:50.888 "action_on_timeout": "none", 00:39:50.888 "timeout_us": 0, 00:39:50.888 "timeout_admin_us": 0, 00:39:50.888 "keep_alive_timeout_ms": 10000, 00:39:50.888 "arbitration_burst": 0, 00:39:50.888 "low_priority_weight": 0, 00:39:50.888 "medium_priority_weight": 0, 00:39:50.888 "high_priority_weight": 0, 00:39:50.888 "nvme_adminq_poll_period_us": 10000, 00:39:50.888 "nvme_ioq_poll_period_us": 0, 00:39:50.888 "io_queue_requests": 512, 
00:39:50.888 "delay_cmd_submit": true, 00:39:50.888 "transport_retry_count": 4, 00:39:50.888 "bdev_retry_count": 3, 00:39:50.888 "transport_ack_timeout": 0, 00:39:50.888 "ctrlr_loss_timeout_sec": 0, 00:39:50.888 "reconnect_delay_sec": 0, 00:39:50.888 "fast_io_fail_timeout_sec": 0, 00:39:50.888 "disable_auto_failback": false, 00:39:50.888 "generate_uuids": false, 00:39:50.888 "transport_tos": 0, 00:39:50.888 "nvme_error_stat": false, 00:39:50.888 "rdma_srq_size": 0, 00:39:50.888 "io_path_stat": false, 00:39:50.888 "allow_accel_sequence": false, 00:39:50.888 "rdma_max_cq_size": 0, 00:39:50.888 "rdma_cm_event_timeout_ms": 0, 00:39:50.888 "dhchap_digests": [ 00:39:50.888 "sha256", 00:39:50.888 "sha384", 00:39:50.888 "sha512" 00:39:50.888 ], 00:39:50.888 "dhchap_dhgroups": [ 00:39:50.888 "null", 00:39:50.888 "ffdhe2048", 00:39:50.888 "ffdhe3072", 00:39:50.888 "ffdhe4096", 00:39:50.888 "ffdhe6144", 00:39:50.888 "ffdhe8192" 00:39:50.888 ] 00:39:50.888 } 00:39:50.888 }, 00:39:50.888 { 00:39:50.888 "method": "bdev_nvme_attach_controller", 00:39:50.888 "params": { 00:39:50.888 "name": "nvme0", 00:39:50.888 "trtype": "TCP", 00:39:50.888 "adrfam": "IPv4", 00:39:50.888 "traddr": "127.0.0.1", 00:39:50.888 "trsvcid": "4420", 00:39:50.888 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:50.888 "prchk_reftag": false, 00:39:50.888 "prchk_guard": false, 00:39:50.888 "ctrlr_loss_timeout_sec": 0, 00:39:50.888 "reconnect_delay_sec": 0, 00:39:50.888 "fast_io_fail_timeout_sec": 0, 00:39:50.888 "psk": "key0", 00:39:50.888 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:50.888 "hdgst": false, 00:39:50.888 "ddgst": false, 00:39:50.888 "multipath": "multipath" 00:39:50.888 } 00:39:50.888 }, 00:39:50.888 { 00:39:50.888 "method": "bdev_nvme_set_hotplug", 00:39:50.888 "params": { 00:39:50.888 "period_us": 100000, 00:39:50.888 "enable": false 00:39:50.888 } 00:39:50.888 }, 00:39:50.888 { 00:39:50.888 "method": "bdev_wait_for_examine" 00:39:50.888 } 00:39:50.888 ] 00:39:50.888 }, 00:39:50.888 { 00:39:50.888 "subsystem": "nbd", 00:39:50.888 "config": [] 00:39:50.888 } 00:39:50.888 ] 00:39:50.888 }' 00:39:50.888 10:12:06 keyring_file -- keyring/file.sh@115 -- # killprocess 21630 00:39:50.888 10:12:06 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 21630 ']' 00:39:50.888 10:12:06 keyring_file -- common/autotest_common.sh@958 -- # kill -0 21630 00:39:50.888 10:12:06 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:50.888 10:12:06 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:50.888 10:12:06 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 21630 00:39:51.150 10:12:06 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:51.150 10:12:06 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:51.150 10:12:06 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 21630' 00:39:51.150 killing process with pid 21630 00:39:51.150 10:12:06 keyring_file -- common/autotest_common.sh@973 -- # kill 21630 00:39:51.150 Received shutdown signal, test time was about 1.000000 seconds 00:39:51.150 00:39:51.150 Latency(us) 00:39:51.150 [2024-11-27T09:12:06.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:51.150 [2024-11-27T09:12:06.616Z] =================================================================================================================== 00:39:51.150 [2024-11-27T09:12:06.616Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:51.150 10:12:06 
keyring_file -- common/autotest_common.sh@978 -- # wait 21630 00:39:51.150 10:12:06 keyring_file -- keyring/file.sh@118 -- # bperfpid=23544 00:39:51.150 10:12:06 keyring_file -- keyring/file.sh@120 -- # waitforlisten 23544 /var/tmp/bperf.sock 00:39:51.150 10:12:06 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 23544 ']' 00:39:51.150 10:12:06 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:51.150 10:12:06 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:51.150 10:12:06 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:39:51.150 10:12:06 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:51.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:51.150 10:12:06 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:51.150 10:12:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:51.150 10:12:06 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:39:51.150 "subsystems": [ 00:39:51.150 { 00:39:51.150 "subsystem": "keyring", 00:39:51.150 "config": [ 00:39:51.150 { 00:39:51.150 "method": "keyring_file_add_key", 00:39:51.150 "params": { 00:39:51.150 "name": "key0", 00:39:51.150 "path": "/tmp/tmp.PozAMGxMvN" 00:39:51.150 } 00:39:51.150 }, 00:39:51.150 { 00:39:51.150 "method": "keyring_file_add_key", 00:39:51.150 "params": { 00:39:51.150 "name": "key1", 00:39:51.150 "path": "/tmp/tmp.XP5WTNdAcw" 00:39:51.150 } 00:39:51.150 } 00:39:51.150 ] 00:39:51.150 }, 00:39:51.150 { 00:39:51.150 "subsystem": "iobuf", 00:39:51.150 "config": [ 00:39:51.150 { 00:39:51.150 "method": "iobuf_set_options", 00:39:51.150 "params": { 00:39:51.150 "small_pool_count": 8192, 00:39:51.150 "large_pool_count": 1024, 00:39:51.150 "small_bufsize": 8192, 00:39:51.150 "large_bufsize": 135168, 00:39:51.150 "enable_numa": false 00:39:51.150 } 00:39:51.150 } 00:39:51.150 ] 00:39:51.150 }, 00:39:51.150 { 00:39:51.150 "subsystem": "sock", 00:39:51.150 "config": [ 00:39:51.150 { 00:39:51.150 "method": "sock_set_default_impl", 00:39:51.150 "params": { 00:39:51.150 "impl_name": "posix" 00:39:51.150 } 00:39:51.150 }, 00:39:51.150 { 00:39:51.150 "method": "sock_impl_set_options", 00:39:51.150 "params": { 00:39:51.150 "impl_name": "ssl", 00:39:51.150 "recv_buf_size": 4096, 00:39:51.150 "send_buf_size": 4096, 00:39:51.150 "enable_recv_pipe": true, 00:39:51.150 "enable_quickack": false, 00:39:51.150 "enable_placement_id": 0, 00:39:51.150 "enable_zerocopy_send_server": true, 00:39:51.150 "enable_zerocopy_send_client": false, 00:39:51.150 "zerocopy_threshold": 0, 00:39:51.150 "tls_version": 0, 00:39:51.150 "enable_ktls": false 00:39:51.150 } 00:39:51.150 }, 00:39:51.150 { 00:39:51.150 "method": "sock_impl_set_options", 00:39:51.150 "params": { 00:39:51.150 "impl_name": "posix", 00:39:51.150 "recv_buf_size": 2097152, 00:39:51.150 "send_buf_size": 2097152, 00:39:51.150 "enable_recv_pipe": true, 00:39:51.150 "enable_quickack": false, 00:39:51.150 "enable_placement_id": 0, 00:39:51.150 "enable_zerocopy_send_server": true, 00:39:51.150 "enable_zerocopy_send_client": false, 00:39:51.150 "zerocopy_threshold": 0, 00:39:51.150 "tls_version": 0, 00:39:51.150 "enable_ktls": false 00:39:51.150 } 00:39:51.150 } 00:39:51.150 ] 00:39:51.150 }, 00:39:51.150 { 
00:39:51.150 "subsystem": "vmd", 00:39:51.150 "config": [] 00:39:51.150 }, 00:39:51.150 { 00:39:51.150 "subsystem": "accel", 00:39:51.150 "config": [ 00:39:51.150 { 00:39:51.150 "method": "accel_set_options", 00:39:51.150 "params": { 00:39:51.150 "small_cache_size": 128, 00:39:51.150 "large_cache_size": 16, 00:39:51.150 "task_count": 2048, 00:39:51.150 "sequence_count": 2048, 00:39:51.150 "buf_count": 2048 00:39:51.150 } 00:39:51.150 } 00:39:51.150 ] 00:39:51.150 }, 00:39:51.150 { 00:39:51.150 "subsystem": "bdev", 00:39:51.150 "config": [ 00:39:51.150 { 00:39:51.150 "method": "bdev_set_options", 00:39:51.150 "params": { 00:39:51.150 "bdev_io_pool_size": 65535, 00:39:51.150 "bdev_io_cache_size": 256, 00:39:51.150 "bdev_auto_examine": true, 00:39:51.151 "iobuf_small_cache_size": 128, 00:39:51.151 "iobuf_large_cache_size": 16 00:39:51.151 } 00:39:51.151 }, 00:39:51.151 { 00:39:51.151 "method": "bdev_raid_set_options", 00:39:51.151 "params": { 00:39:51.151 "process_window_size_kb": 1024, 00:39:51.151 "process_max_bandwidth_mb_sec": 0 00:39:51.151 } 00:39:51.151 }, 00:39:51.151 { 00:39:51.151 "method": "bdev_iscsi_set_options", 00:39:51.151 "params": { 00:39:51.151 "timeout_sec": 30 00:39:51.151 } 00:39:51.151 }, 00:39:51.151 { 00:39:51.151 "method": "bdev_nvme_set_options", 00:39:51.151 "params": { 00:39:51.151 "action_on_timeout": "none", 00:39:51.151 "timeout_us": 0, 00:39:51.151 "timeout_admin_us": 0, 00:39:51.151 "keep_alive_timeout_ms": 10000, 00:39:51.151 "arbitration_burst": 0, 00:39:51.151 "low_priority_weight": 0, 00:39:51.151 "medium_priority_weight": 0, 00:39:51.151 "high_priority_weight": 0, 00:39:51.151 "nvme_adminq_poll_period_us": 10000, 00:39:51.151 "nvme_ioq_poll_period_us": 0, 00:39:51.151 "io_queue_requests": 512, 00:39:51.151 "delay_cmd_submit": true, 00:39:51.151 "transport_retry_count": 4, 00:39:51.151 "bdev_retry_count": 3, 00:39:51.151 "transport_ack_timeout": 0, 00:39:51.151 "ctrlr_loss_timeout_sec": 0, 00:39:51.151 "reconnect_delay_sec": 0, 00:39:51.151 "fast_io_fail_timeout_sec": 0, 00:39:51.151 "disable_auto_failback": false, 00:39:51.151 "generate_uuids": false, 00:39:51.151 "transport_tos": 0, 00:39:51.151 "nvme_error_stat": false, 00:39:51.151 "rdma_srq_size": 0, 00:39:51.151 "io_path_stat": false, 00:39:51.151 "allow_accel_sequence": false, 00:39:51.151 "rdma_max_cq_size": 0, 00:39:51.151 "rdma_cm_event_timeout_ms": 0, 00:39:51.151 "dhchap_digests": [ 00:39:51.151 "sha256", 00:39:51.151 "sha384", 00:39:51.151 "sha512" 00:39:51.151 ], 00:39:51.151 "dhchap_dhgroups": [ 00:39:51.151 "null", 00:39:51.151 "ffdhe2048", 00:39:51.151 "ffdhe3072", 00:39:51.151 "ffdhe4096", 00:39:51.151 "ffdhe6144", 00:39:51.151 "ffdhe8192" 00:39:51.151 ] 00:39:51.151 } 00:39:51.151 }, 00:39:51.151 { 00:39:51.151 "method": "bdev_nvme_attach_controller", 00:39:51.151 "params": { 00:39:51.151 "name": "nvme0", 00:39:51.151 "trtype": "TCP", 00:39:51.151 "adrfam": "IPv4", 00:39:51.151 "traddr": "127.0.0.1", 00:39:51.151 "trsvcid": "4420", 00:39:51.151 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:51.151 "prchk_reftag": false, 00:39:51.151 "prchk_guard": false, 00:39:51.151 "ctrlr_loss_timeout_sec": 0, 00:39:51.151 "reconnect_delay_sec": 0, 00:39:51.151 "fast_io_fail_timeout_sec": 0, 00:39:51.151 "psk": "key0", 00:39:51.151 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:51.151 "hdgst": false, 00:39:51.151 "ddgst": false, 00:39:51.151 "multipath": "multipath" 00:39:51.151 } 00:39:51.151 }, 00:39:51.151 { 00:39:51.151 "method": "bdev_nvme_set_hotplug", 00:39:51.151 "params": { 00:39:51.151 
"period_us": 100000, 00:39:51.151 "enable": false 00:39:51.151 } 00:39:51.151 }, 00:39:51.151 { 00:39:51.151 "method": "bdev_wait_for_examine" 00:39:51.151 } 00:39:51.151 ] 00:39:51.151 }, 00:39:51.151 { 00:39:51.151 "subsystem": "nbd", 00:39:51.151 "config": [] 00:39:51.151 } 00:39:51.151 ] 00:39:51.151 }' 00:39:51.151 [2024-11-27 10:12:06.531421] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 00:39:51.151 [2024-11-27 10:12:06.531478] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid23544 ] 00:39:51.151 [2024-11-27 10:12:06.614155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:51.411 [2024-11-27 10:12:06.642471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:51.411 [2024-11-27 10:12:06.785260] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:51.981 10:12:07 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:51.981 10:12:07 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:51.981 10:12:07 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:39:51.981 10:12:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:51.981 10:12:07 keyring_file -- keyring/file.sh@121 -- # jq length 00:39:52.242 10:12:07 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:39:52.242 10:12:07 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:39:52.242 10:12:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:52.242 10:12:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:52.242 10:12:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:52.242 10:12:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:52.242 10:12:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:52.501 10:12:07 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:39:52.501 10:12:07 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:39:52.501 10:12:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:52.501 10:12:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:52.501 10:12:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:52.502 10:12:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:52.502 10:12:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:52.502 10:12:07 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:39:52.502 10:12:07 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:39:52.502 10:12:07 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:39:52.502 10:12:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:39:52.762 10:12:08 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:39:52.762 10:12:08 keyring_file -- keyring/file.sh@1 -- # cleanup 00:39:52.762 10:12:08 keyring_file -- keyring/file.sh@19 
-- # rm -f /tmp/tmp.PozAMGxMvN /tmp/tmp.XP5WTNdAcw 00:39:52.762 10:12:08 keyring_file -- keyring/file.sh@20 -- # killprocess 23544 00:39:52.762 10:12:08 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 23544 ']' 00:39:52.762 10:12:08 keyring_file -- common/autotest_common.sh@958 -- # kill -0 23544 00:39:52.762 10:12:08 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:52.762 10:12:08 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:52.762 10:12:08 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 23544 00:39:52.762 10:12:08 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:52.762 10:12:08 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:52.762 10:12:08 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 23544' 00:39:52.762 killing process with pid 23544 00:39:52.762 10:12:08 keyring_file -- common/autotest_common.sh@973 -- # kill 23544 00:39:52.762 Received shutdown signal, test time was about 1.000000 seconds 00:39:52.762 00:39:52.762 Latency(us) 00:39:52.762 [2024-11-27T09:12:08.228Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:52.762 [2024-11-27T09:12:08.228Z] =================================================================================================================== 00:39:52.762 [2024-11-27T09:12:08.228Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:39:52.762 10:12:08 keyring_file -- common/autotest_common.sh@978 -- # wait 23544 00:39:53.022 10:12:08 keyring_file -- keyring/file.sh@21 -- # killprocess 21594 00:39:53.022 10:12:08 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 21594 ']' 00:39:53.022 10:12:08 keyring_file -- common/autotest_common.sh@958 -- # kill -0 21594 00:39:53.022 10:12:08 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:53.022 10:12:08 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:53.022 10:12:08 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 21594 00:39:53.022 10:12:08 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:53.022 10:12:08 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:53.022 10:12:08 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 21594' 00:39:53.022 killing process with pid 21594 00:39:53.022 10:12:08 keyring_file -- common/autotest_common.sh@973 -- # kill 21594 00:39:53.022 10:12:08 keyring_file -- common/autotest_common.sh@978 -- # wait 21594 00:39:53.282 00:39:53.282 real 0m11.938s 00:39:53.282 user 0m28.919s 00:39:53.282 sys 0m2.630s 00:39:53.282 10:12:08 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:53.282 10:12:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:53.282 ************************************ 00:39:53.282 END TEST keyring_file 00:39:53.282 ************************************ 00:39:53.282 10:12:08 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:39:53.282 10:12:08 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:53.282 10:12:08 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:53.282 10:12:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:53.282 10:12:08 -- common/autotest_common.sh@10 -- # set +x 
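The bperf instance torn down above was launched (file.sh@116) with its entire configuration fed in as JSON: the -c /dev/fd/63 argument is what bash process substitution resolves to, so bdevperf picks up the keyring, sock and bdev config without a file ever touching disk. A minimal sketch of that launch pattern, with the JSON cut down to just the keyring subsystem (the full run also configured sock, accel and bdev_nvme as shown above; $SPDK_DIR stands in for the long workspace path):

config='{ "subsystems": [ { "subsystem": "keyring", "config": [
  { "method": "keyring_file_add_key", "params": { "name": "key0", "path": "/tmp/tmp.PozAMGxMvN" } },
  { "method": "keyring_file_add_key", "params": { "name": "key1", "path": "/tmp/tmp.XP5WTNdAcw" } }
] } ] }'
# <(...) shows up in the child process as /dev/fd/63, matching the trace above
"$SPDK_DIR/build/examples/bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config") &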
00:39:53.282 ************************************ 00:39:53.282 START TEST keyring_linux 00:39:53.282 ************************************ 00:39:53.282 10:12:08 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:53.282 Joined session keyring: 825208181 00:39:53.282 * Looking for test storage... 00:39:53.282 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:53.282 10:12:08 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:53.282 10:12:08 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:39:53.282 10:12:08 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:53.543 10:12:08 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:53.543 10:12:08 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:53.543 10:12:08 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:53.543 10:12:08 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:53.543 10:12:08 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:39:53.543 10:12:08 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:39:53.543 10:12:08 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:39:53.543 10:12:08 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:39:53.543 10:12:08 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:39:53.543 10:12:08 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:39:53.543 10:12:08 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:39:53.543 10:12:08 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:53.543 10:12:08 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:39:53.543 10:12:08 keyring_linux -- scripts/common.sh@345 -- # : 1 00:39:53.543 10:12:08 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:53.543 10:12:08 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:53.543 10:12:08 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:39:53.543 10:12:08 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:39:53.543 10:12:08 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:53.543 10:12:08 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:39:53.543 10:12:08 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:39:53.543 10:12:08 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:39:53.543 10:12:08 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:39:53.543 10:12:08 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:53.543 10:12:08 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:39:53.543 10:12:08 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:39:53.543 10:12:08 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:53.543 10:12:08 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:53.543 10:12:08 keyring_linux -- scripts/common.sh@368 -- # return 0 00:39:53.543 10:12:08 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:53.543 10:12:08 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:53.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:53.543 --rc genhtml_branch_coverage=1 00:39:53.543 --rc genhtml_function_coverage=1 00:39:53.543 --rc genhtml_legend=1 00:39:53.543 --rc geninfo_all_blocks=1 00:39:53.543 --rc geninfo_unexecuted_blocks=1 00:39:53.543 00:39:53.543 ' 00:39:53.543 10:12:08 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:53.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:53.544 --rc genhtml_branch_coverage=1 00:39:53.544 --rc genhtml_function_coverage=1 00:39:53.544 --rc genhtml_legend=1 00:39:53.544 --rc geninfo_all_blocks=1 00:39:53.544 --rc geninfo_unexecuted_blocks=1 00:39:53.544 00:39:53.544 ' 00:39:53.544 10:12:08 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:53.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:53.544 --rc genhtml_branch_coverage=1 00:39:53.544 --rc genhtml_function_coverage=1 00:39:53.544 --rc genhtml_legend=1 00:39:53.544 --rc geninfo_all_blocks=1 00:39:53.544 --rc geninfo_unexecuted_blocks=1 00:39:53.544 00:39:53.544 ' 00:39:53.544 10:12:08 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:53.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:53.544 --rc genhtml_branch_coverage=1 00:39:53.544 --rc genhtml_function_coverage=1 00:39:53.544 --rc genhtml_legend=1 00:39:53.544 --rc geninfo_all_blocks=1 00:39:53.544 --rc geninfo_unexecuted_blocks=1 00:39:53.544 00:39:53.544 ' 00:39:53.544 10:12:08 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:53.544 10:12:08 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:53.544 10:12:08 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:39:53.544 10:12:08 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:53.544 10:12:08 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:53.544 10:12:08 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:53.544 10:12:08 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:53.544 10:12:08 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:53.544 10:12:08 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:53.544 10:12:08 keyring_linux -- paths/export.sh@5 -- # export PATH 00:39:53.544 10:12:08 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
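One detail worth pulling out of the nvmf/common.sh sourcing above: the host identity is generated once per run. nvme gen-hostnqn emits a UUID-based NQN, the host ID is that same UUID with the NQN prefix removed, and both are packed into an array that later nvme connect calls splice in. A sketch of the idiom (the prefix strip via parameter expansion is an assumption about how the script derives NVME_HOSTID; the connect line is illustrative usage, not taken from this run):

NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}        # bare UUID after the last ':' (a UUID contains no ':')
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
nvme connect "${NVME_HOST[@]}" -t tcp -a 127.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn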
00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:53.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:53.544 10:12:08 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:53.544 10:12:08 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:53.544 10:12:08 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:53.544 10:12:08 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:39:53.544 10:12:08 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:39:53.544 10:12:08 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:39:53.544 10:12:08 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:39:53.544 10:12:08 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:53.544 10:12:08 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:39:53.544 10:12:08 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:53.544 10:12:08 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:53.544 10:12:08 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:39:53.544 10:12:08 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@733 -- # python - 00:39:53.544 10:12:08 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:39:53.544 10:12:08 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:39:53.544 /tmp/:spdk-test:key0 00:39:53.544 10:12:08 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:39:53.544 10:12:08 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:53.544 10:12:08 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:39:53.544 10:12:08 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:53.544 10:12:08 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:53.544 10:12:08 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:39:53.544 
10:12:08 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:39:53.544 10:12:08 keyring_linux -- nvmf/common.sh@733 -- # python - 00:39:53.544 10:12:08 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:39:53.544 10:12:08 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:39:53.544 /tmp/:spdk-test:key1 00:39:53.544 10:12:08 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=24025 00:39:53.544 10:12:08 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 24025 00:39:53.544 10:12:08 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:53.544 10:12:08 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 24025 ']' 00:39:53.544 10:12:08 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:53.544 10:12:08 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:53.544 10:12:08 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:53.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:53.544 10:12:08 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:53.544 10:12:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:53.544 [2024-11-27 10:12:08.987421] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
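prep_key above converts each hex string into a TLS PSK in the NVMe interchange format through format_interchange_psk, which hands off to an inline python snippet (nvmf/common.sh@733, invoked as python -). Judging from the output seen later (NVMeTLSkey-1:00:MDAxMTIy...JEiQ:), the base64 payload is the ASCII key bytes followed by a 4-byte CRC-32 of those bytes. A sketch under that assumption; the CRC endianness and digest formatting here are inferred from the single sample above, so check nvmf/common.sh before relying on it:

format_interchange_psk() {    # usage: format_interchange_psk <key> <digest>
    python3 - "$1" "$2" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()                     # the key is used as ASCII bytes, not hex-decoded
crc = zlib.crc32(key).to_bytes(4, "little")    # assumption: little-endian CRC-32 trailer
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02d}:{base64.b64encode(key + crc).decode()}:")
PYEOF
}
format_interchange_psk 00112233445566778899aabbccddeeff 0 > /tmp/:spdk-test:key0
chmod 0600 /tmp/:spdk-test:key0    # PSK files must not be group/world readable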
00:39:53.544 [2024-11-27 10:12:08.987501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid24025 ] 00:39:53.804 [2024-11-27 10:12:09.075420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:53.804 [2024-11-27 10:12:09.111461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:54.373 10:12:09 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:54.373 10:12:09 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:39:54.373 10:12:09 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:39:54.373 10:12:09 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:54.373 10:12:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:54.373 [2024-11-27 10:12:09.776580] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:54.373 null0 00:39:54.373 [2024-11-27 10:12:09.808640] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:54.373 [2024-11-27 10:12:09.808991] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:54.373 10:12:09 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:54.373 10:12:09 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:39:54.373 807917644 00:39:54.373 10:12:09 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:39:54.373 223856374 00:39:54.373 10:12:09 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=24562 00:39:54.373 10:12:09 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 24562 /var/tmp/bperf.sock 00:39:54.373 10:12:09 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:39:54.373 10:12:09 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 24562 ']' 00:39:54.634 10:12:09 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:54.634 10:12:09 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:54.635 10:12:09 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:54.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:54.635 10:12:09 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:54.635 10:12:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:54.635 [2024-11-27 10:12:09.885355] Starting SPDK v25.01-pre git sha1 c25d82eb4 / DPDK 24.03.0 initialization... 
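linux.sh@66-67 then mirrors the same two secrets into the kernel session keyring, which is what lets the later attach reference a key by name (:spdk-test:key0) rather than by file path. The serials echoed above (807917644 and 223856374) are simply what keyctl add prints. The round trip the test exercises looks like this (payload abbreviated):

sn=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:...:" @s)   # add to session keyring; prints the serial
keyctl search @s user :spdk-test:key0    # name -> serial, the get_keysn step seen below
keyctl print "$sn"                       # dump the payload back for byte-for-byte comparison
keyctl unlink "$sn"                      # what cleanup() runs at the end ("1 links removed")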
00:39:54.635 [2024-11-27 10:12:09.885403] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid24562 ] 00:39:54.635 [2024-11-27 10:12:09.965850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:54.635 [2024-11-27 10:12:09.995774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:55.577 10:12:10 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:55.577 10:12:10 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:39:55.577 10:12:10 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:39:55.577 10:12:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:39:55.577 10:12:10 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:39:55.577 10:12:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:39:55.577 10:12:11 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:55.577 10:12:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:55.838 [2024-11-27 10:12:11.191780] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:55.838 nvme0n1 00:39:55.838 10:12:11 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:39:55.838 10:12:11 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:39:55.838 10:12:11 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:55.838 10:12:11 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:55.838 10:12:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:55.838 10:12:11 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:56.098 10:12:11 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:39:56.098 10:12:11 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:56.098 10:12:11 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:39:56.099 10:12:11 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:39:56.099 10:12:11 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:56.099 10:12:11 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:39:56.099 10:12:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:56.359 10:12:11 keyring_linux -- keyring/linux.sh@25 -- # sn=807917644 00:39:56.359 10:12:11 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:39:56.359 10:12:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:56.359 10:12:11 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 807917644 == \8\0\7\9\1\7\6\4\4 ]] 00:39:56.359 10:12:11 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 807917644 00:39:56.359 10:12:11 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:39:56.359 10:12:11 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:56.359 Running I/O for 1 seconds... 00:39:57.301 24509.00 IOPS, 95.74 MiB/s 00:39:57.301 Latency(us) 00:39:57.301 [2024-11-27T09:12:12.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:57.301 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:39:57.301 nvme0n1 : 1.01 24508.74 95.74 0.00 0.00 5206.90 1774.93 6471.68 00:39:57.301 [2024-11-27T09:12:12.767Z] =================================================================================================================== 00:39:57.301 [2024-11-27T09:12:12.767Z] Total : 24508.74 95.74 0.00 0.00 5206.90 1774.93 6471.68 00:39:57.301 { 00:39:57.301 "results": [ 00:39:57.301 { 00:39:57.301 "job": "nvme0n1", 00:39:57.301 "core_mask": "0x2", 00:39:57.301 "workload": "randread", 00:39:57.301 "status": "finished", 00:39:57.301 "queue_depth": 128, 00:39:57.301 "io_size": 4096, 00:39:57.301 "runtime": 1.005274, 00:39:57.301 "iops": 24508.74090049081, 00:39:57.301 "mibps": 95.73726914254223, 00:39:57.301 "io_failed": 0, 00:39:57.301 "io_timeout": 0, 00:39:57.301 "avg_latency_us": 5206.902707470845, 00:39:57.301 "min_latency_us": 1774.9333333333334, 00:39:57.301 "max_latency_us": 6471.68 00:39:57.301 } 00:39:57.301 ], 00:39:57.301 "core_count": 1 00:39:57.301 } 00:39:57.301 10:12:12 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:57.301 10:12:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:57.562 10:12:12 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:39:57.562 10:12:12 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:39:57.562 10:12:12 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:57.562 10:12:12 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:57.562 10:12:12 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:57.562 10:12:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:57.823 10:12:13 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:39:57.823 10:12:13 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:57.823 10:12:13 keyring_linux -- keyring/linux.sh@23 -- # return 00:39:57.823 10:12:13 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:57.823 10:12:13 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:39:57.823 10:12:13 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
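check_keys above cross-checks SPDK's view of the keyring (over the bperf RPC socket) against the kernel's: key count via jq length, then the serial number (.sn) SPDK reports against what keyctl search resolves, then the payload itself via keyctl print. Condensed, with $SPDK_DIR again standing in for the workspace path:

rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"
count=$($rpc keyring_get_keys | jq length)
sn_rpc=$($rpc keyring_get_keys | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn')
sn_kern=$(keyctl search @s user :spdk-test:key0)
[[ $count -eq 1 && $sn_rpc == "$sn_kern" ]]    # RPC and kernel views must agree
keyctl print "$sn_kern"                        # and the payload must match the formatted PSK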
00:39:57.823 10:12:13 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:57.823 10:12:13 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:57.823 10:12:13 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:57.823 10:12:13 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:57.823 10:12:13 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:57.823 10:12:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:58.083 [2024-11-27 10:12:13.307602] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:58.083 [2024-11-27 10:12:13.308402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc754f0 (107): Transport endpoint is not connected 00:39:58.083 [2024-11-27 10:12:13.309398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc754f0 (9): Bad file descriptor 00:39:58.083 [2024-11-27 10:12:13.310400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:39:58.083 [2024-11-27 10:12:13.310408] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:58.083 [2024-11-27 10:12:13.310413] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:39:58.083 [2024-11-27 10:12:13.310419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
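The failed attach above is the point of the test: the target only knows key0, so dialling back in with :spdk-test:key1 has to be rejected, and the script asserts the failure through a NOT wrapper. Stripped of the valid_exec_arg plumbing visible in the trace, the helper reduces to roughly this (the es > 128 signal-exit handling seen above is simplified away):

NOT() {                    # succeed only when the wrapped command fails
    local es=0
    "$@" || es=$?
    (( es != 0 ))
}
NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1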
00:39:58.083 request: 00:39:58.083 { 00:39:58.083 "name": "nvme0", 00:39:58.083 "trtype": "tcp", 00:39:58.083 "traddr": "127.0.0.1", 00:39:58.083 "adrfam": "ipv4", 00:39:58.083 "trsvcid": "4420", 00:39:58.083 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:58.083 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:58.083 "prchk_reftag": false, 00:39:58.083 "prchk_guard": false, 00:39:58.083 "hdgst": false, 00:39:58.083 "ddgst": false, 00:39:58.083 "psk": ":spdk-test:key1", 00:39:58.083 "allow_unrecognized_csi": false, 00:39:58.083 "method": "bdev_nvme_attach_controller", 00:39:58.083 "req_id": 1 00:39:58.083 } 00:39:58.083 Got JSON-RPC error response 00:39:58.084 response: 00:39:58.084 { 00:39:58.084 "code": -5, 00:39:58.084 "message": "Input/output error" 00:39:58.084 } 00:39:58.084 10:12:13 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:39:58.084 10:12:13 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:58.084 10:12:13 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:58.084 10:12:13 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:58.084 10:12:13 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:39:58.084 10:12:13 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:58.084 10:12:13 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:39:58.084 10:12:13 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:39:58.084 10:12:13 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:39:58.084 10:12:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:58.084 10:12:13 keyring_linux -- keyring/linux.sh@33 -- # sn=807917644 00:39:58.084 10:12:13 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 807917644 00:39:58.084 1 links removed 00:39:58.084 10:12:13 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:58.084 10:12:13 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:39:58.084 10:12:13 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:39:58.084 10:12:13 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:39:58.084 10:12:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:39:58.084 10:12:13 keyring_linux -- keyring/linux.sh@33 -- # sn=223856374 00:39:58.084 10:12:13 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 223856374 00:39:58.084 1 links removed 00:39:58.084 10:12:13 keyring_linux -- keyring/linux.sh@41 -- # killprocess 24562 00:39:58.084 10:12:13 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 24562 ']' 00:39:58.084 10:12:13 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 24562 00:39:58.084 10:12:13 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:39:58.084 10:12:13 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:58.084 10:12:13 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 24562 00:39:58.084 10:12:13 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:58.084 10:12:13 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:58.084 10:12:13 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 24562' 00:39:58.084 killing process with pid 24562 00:39:58.084 10:12:13 keyring_linux -- common/autotest_common.sh@973 -- # kill 24562 00:39:58.084 Received shutdown signal, test time was about 1.000000 seconds 00:39:58.084 00:39:58.084 Latency(us) 
00:39:58.084 [2024-11-27T09:12:13.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:58.084 [2024-11-27T09:12:13.550Z] =================================================================================================================== 00:39:58.084 [2024-11-27T09:12:13.550Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:58.084 10:12:13 keyring_linux -- common/autotest_common.sh@978 -- # wait 24562 00:39:58.084 10:12:13 keyring_linux -- keyring/linux.sh@42 -- # killprocess 24025 00:39:58.084 10:12:13 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 24025 ']' 00:39:58.084 10:12:13 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 24025 00:39:58.084 10:12:13 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:39:58.084 10:12:13 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:58.084 10:12:13 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 24025 00:39:58.345 10:12:13 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:58.345 10:12:13 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:58.345 10:12:13 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 24025' 00:39:58.345 killing process with pid 24025 00:39:58.345 10:12:13 keyring_linux -- common/autotest_common.sh@973 -- # kill 24025 00:39:58.345 10:12:13 keyring_linux -- common/autotest_common.sh@978 -- # wait 24025 00:39:58.345 00:39:58.345 real 0m5.178s 00:39:58.345 user 0m9.575s 00:39:58.345 sys 0m1.482s 00:39:58.345 10:12:13 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:58.345 10:12:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:58.345 ************************************ 00:39:58.345 END TEST keyring_linux 00:39:58.345 ************************************ 00:39:58.345 10:12:13 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:39:58.345 10:12:13 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:39:58.345 10:12:13 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:39:58.345 10:12:13 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:39:58.345 10:12:13 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:39:58.345 10:12:13 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:39:58.345 10:12:13 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:39:58.345 10:12:13 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:39:58.345 10:12:13 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:39:58.345 10:12:13 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:39:58.345 10:12:13 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:39:58.345 10:12:13 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:39:58.345 10:12:13 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:39:58.345 10:12:13 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:39:58.345 10:12:13 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:39:58.345 10:12:13 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:39:58.345 10:12:13 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:39:58.345 10:12:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:58.345 10:12:13 -- common/autotest_common.sh@10 -- # set +x 00:39:58.606 10:12:13 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:39:58.606 10:12:13 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:39:58.606 10:12:13 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:39:58.606 10:12:13 -- common/autotest_common.sh@10 -- # set +x 00:40:06.744 INFO: APP EXITING 00:40:06.744 INFO: killing all VMs 
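Both teardowns above (bperf, pid 24562, then the target, pid 24025) go through the same killprocess helper, and the traced checks explain its shape: refuse an empty pid, probe liveness with kill -0, and on Linux read the process's comm name so a sudo wrapper is never signalled directly. A simplified reconstruction of the visible behavior (the real helper in common/autotest_common.sh has more branches):

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                   # already gone?
    if [ "$(uname)" = Linux ]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0 / reactor_1 above
        [ "$name" = sudo ] && return 1           # never SIGTERM the sudo wrapper itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                  # valid because the pid is a child of this shell
}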
00:40:06.744 INFO: killing vhost app 00:40:06.744 WARN: no vhost pid file found 00:40:06.744 INFO: EXIT DONE 00:40:09.315 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:40:09.315 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:40:09.315 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:40:09.315 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:40:09.315 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:40:09.575 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:40:09.575 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:40:09.575 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:40:09.575 0000:65:00.0 (144d a80a): Already using the nvme driver 00:40:09.575 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:40:09.575 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:40:09.575 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:40:09.575 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:40:09.575 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:40:09.575 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:40:09.836 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:40:09.836 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:40:14.066 Cleaning 00:40:14.066 Removing: /var/run/dpdk/spdk0/config 00:40:14.066 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:40:14.066 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:40:14.066 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:40:14.066 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:40:14.066 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:40:14.066 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:40:14.066 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:40:14.066 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:40:14.066 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:40:14.066 Removing: /var/run/dpdk/spdk0/hugepage_info 00:40:14.066 Removing: /var/run/dpdk/spdk1/config 00:40:14.066 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:40:14.066 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:40:14.066 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:40:14.066 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:40:14.066 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:40:14.066 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:40:14.066 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:40:14.066 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:40:14.066 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:40:14.066 Removing: /var/run/dpdk/spdk1/hugepage_info 00:40:14.066 Removing: /var/run/dpdk/spdk2/config 00:40:14.066 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:40:14.066 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:40:14.066 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:40:14.066 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:40:14.066 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:40:14.066 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:40:14.066 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:40:14.066 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:40:14.066 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:40:14.066 Removing: /var/run/dpdk/spdk2/hugepage_info 00:40:14.066 Removing: /var/run/dpdk/spdk3/config 00:40:14.066 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:40:14.066 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:40:14.066 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:40:14.066 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:40:14.066 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:40:14.066 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:40:14.066 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:40:14.066 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:40:14.066 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:40:14.066 Removing: /var/run/dpdk/spdk3/hugepage_info 00:40:14.066 Removing: /var/run/dpdk/spdk4/config 00:40:14.066 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:40:14.066 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:40:14.066 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:40:14.066 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:40:14.066 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:40:14.066 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:40:14.066 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:40:14.066 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:40:14.066 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:40:14.066 Removing: /var/run/dpdk/spdk4/hugepage_info 00:40:14.066 Removing: /dev/shm/bdev_svc_trace.1 00:40:14.066 Removing: /dev/shm/nvmf_trace.0 00:40:14.066 Removing: /dev/shm/spdk_tgt_trace.pid3639021 00:40:14.066 Removing: /var/run/dpdk/spdk0 00:40:14.066 Removing: /var/run/dpdk/spdk1 00:40:14.066 Removing: /var/run/dpdk/spdk2 00:40:14.066 Removing: /var/run/dpdk/spdk3 00:40:14.066 Removing: /var/run/dpdk/spdk4 00:40:14.066 Removing: /var/run/dpdk/spdk_pid1130 00:40:14.066 Removing: /var/run/dpdk/spdk_pid11445 00:40:14.066 Removing: /var/run/dpdk/spdk_pid12052 00:40:14.066 Removing: /var/run/dpdk/spdk_pid12580 00:40:14.066 Removing: /var/run/dpdk/spdk_pid15399 00:40:14.066 Removing: /var/run/dpdk/spdk_pid16067 00:40:14.066 Removing: /var/run/dpdk/spdk_pid16708 00:40:14.066 Removing: /var/run/dpdk/spdk_pid21594 00:40:14.066 Removing: /var/run/dpdk/spdk_pid21630 00:40:14.066 Removing: /var/run/dpdk/spdk_pid23544 00:40:14.066 Removing: /var/run/dpdk/spdk_pid24025 00:40:14.066 Removing: /var/run/dpdk/spdk_pid24562 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3637529 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3639021 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3639870 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3640910 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3641251 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3642322 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3642558 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3642789 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3643926 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3644618 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3644969 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3645321 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3645660 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3646008 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3646359 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3646707 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3647063 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3648172 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3651717 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3652051 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3652378 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3652506 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3652923 
00:40:14.066 Removing: /var/run/dpdk/spdk_pid3653212 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3653590 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3653864 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3654131 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3654306 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3654580 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3654683 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3655192 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3655481 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3655918 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3661139 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3666362 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3678459 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3679250 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3684550 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3684909 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3690155 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3697230 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3700472 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3713452 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3724626 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3726643 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3727664 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3748663 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3753443 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3809978 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3816363 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3824103 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3832004 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3832012 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3833018 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3834024 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3835032 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3835699 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3835705 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3836037 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3836050 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3836145 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3837211 00:40:14.066 Removing: /var/run/dpdk/spdk_pid3838232 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3839324 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3839922 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3840047 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3840270 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3841653 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3842936 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3852862 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3887416 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3892818 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3894816 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3897048 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3897218 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3897530 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3897872 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3898596 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3900675 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3902022 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3902667 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3905720 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3906500 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3907429 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3912381 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3919053 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3919055 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3919056 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3923747 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3933984 
00:40:14.067 Removing: /var/run/dpdk/spdk_pid3938764 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3946196 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3947690 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3949481 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3951057 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3956823 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3962482 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3967509 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3976622 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3976738 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3981945 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3982157 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3982321 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3982982 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3982990 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3988438 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3989195 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3994603 00:40:14.067 Removing: /var/run/dpdk/spdk_pid3997735 00:40:14.067 Removing: /var/run/dpdk/spdk_pid4004452 00:40:14.067 Removing: /var/run/dpdk/spdk_pid4011001 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4021804 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4030479 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4030483 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4053326 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4054032 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4054866 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4055654 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4056640 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4057415 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4058133 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4058820 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4064093 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4064332 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4072133 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4072323 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4078868 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4083998 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4095404 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4096122 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4101232 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4101638 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4106672 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4113526 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4117167 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4129337 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4140013 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4142027 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4143032 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4162633 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4167353 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4171098 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4178818 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4178889 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4184774 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4187078 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4189484 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4190806 00:40:14.329 Removing: /var/run/dpdk/spdk_pid4193197 00:40:14.329 Clean 00:40:14.590 10:12:29 -- common/autotest_common.sh@1453 -- # return 0 00:40:14.590 10:12:29 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:40:14.590 10:12:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:14.590 10:12:29 -- common/autotest_common.sh@10 -- # set +x 00:40:14.590 10:12:29 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:40:14.590 10:12:29 -- 
common/autotest_common.sh@732 -- # xtrace_disable 00:40:14.590 10:12:29 -- common/autotest_common.sh@10 -- # set +x 00:40:14.590 10:12:29 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:40:14.590 10:12:29 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:40:14.590 10:12:29 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:40:14.590 10:12:29 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:40:14.590 10:12:29 -- spdk/autotest.sh@398 -- # hostname 00:40:14.590 10:12:29 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:40:14.850 geninfo: WARNING: invalid characters removed from testname! 00:40:41.579 10:12:55 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:43.510 10:12:58 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:44.898 10:13:00 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:46.809 10:13:01 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:48.198 10:13:03 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:50.112 10:13:05 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:51.497 10:13:06 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:40:51.497 10:13:06 -- spdk/autorun.sh@1 -- $ timing_finish 00:40:51.497 10:13:06 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:40:51.497 10:13:06 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:40:51.497 10:13:06 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:40:51.497 10:13:06 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:40:51.497 + [[ -n 3552445 ]] 00:40:51.497 + sudo kill 3552445 00:40:51.509 [Pipeline] } 00:40:51.525 [Pipeline] // stage 00:40:51.530 [Pipeline] } 00:40:51.545 [Pipeline] // timeout 00:40:51.550 [Pipeline] } 00:40:51.564 [Pipeline] // catchError 00:40:51.570 [Pipeline] } 00:40:51.584 [Pipeline] // wrap 00:40:51.591 [Pipeline] } 00:40:51.604 [Pipeline] // catchError 00:40:51.613 [Pipeline] stage 00:40:51.616 [Pipeline] { (Epilogue) 00:40:51.628 [Pipeline] catchError 00:40:51.630 [Pipeline] { 00:40:51.643 [Pipeline] echo 00:40:51.645 Cleanup processes 00:40:51.651 [Pipeline] sh 00:40:52.083 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:52.083 37781 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:52.099 [Pipeline] sh 00:40:52.388 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:52.388 ++ grep -v 'sudo pgrep' 00:40:52.388 ++ awk '{print $1}' 00:40:52.388 + sudo kill -9 00:40:52.388 + true 00:40:52.401 [Pipeline] sh 00:40:52.691 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:41:04.933 [Pipeline] sh 00:41:05.222 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:41:05.222 Artifacts sizes are good 00:41:05.238 [Pipeline] archiveArtifacts 00:41:05.246 Archiving artifacts 00:41:05.389 [Pipeline] sh 00:41:05.676 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:41:05.692 [Pipeline] cleanWs 00:41:05.703 [WS-CLEANUP] Deleting project workspace... 00:41:05.703 [WS-CLEANUP] Deferred wipeout is used... 00:41:05.710 [WS-CLEANUP] done 00:41:05.712 [Pipeline] } 00:41:05.730 [Pipeline] // catchError 00:41:05.742 [Pipeline] sh 00:41:06.030 + logger -p user.info -t JENKINS-CI 00:41:06.041 [Pipeline] } 00:41:06.055 [Pipeline] // stage 00:41:06.060 [Pipeline] } 00:41:06.073 [Pipeline] // node 00:41:06.079 [Pipeline] End of Pipeline 00:41:06.137 Finished: SUCCESS
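For reference, the coverage post-processing that produced cov_total.info above follows a capture, merge, filter sequence; the flags below appear verbatim in the trace, with $SPDK and $OUT shortening the workspace and output paths (the genhtml rc options and the remaining -r filters for examples/vmd, spdk_lspci and spdk_top follow the same shape and are trimmed here):

LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
lcov $LCOV_OPTS -q -c --no-external -d "$SPDK" -t "$(hostname)" -o "$OUT/cov_test.info"      # capture this run
lcov $LCOV_OPTS -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"  # merge with baseline
lcov $LCOV_OPTS -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"              # drop DPDK sources
lcov $LCOV_OPTS -q -r "$OUT/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$OUT/cov_total.info"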